Test Report: KVM_Linux_crio 19598

cb70ad94d69a229bf8d3511a5a00af396fa2386e:2024-09-10:36157

Test fail (30/312)

Order  Failed test  Duration (s)
33 TestAddons/parallel/Registry 73.87
34 TestAddons/parallel/Ingress 151.8
36 TestAddons/parallel/MetricsServer 321.64
164 TestMultiControlPlane/serial/StopSecondaryNode 141.87
166 TestMultiControlPlane/serial/RestartSecondaryNode 50.44
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 389.72
171 TestMultiControlPlane/serial/StopCluster 141.72
231 TestMultiNode/serial/RestartKeepsNodes 327.49
233 TestMultiNode/serial/StopMultiNode 141.34
240 TestPreload 270.53
248 TestKubernetesUpgrade 454.54
276 TestPause/serial/SecondStartNoReconfiguration 57.86
314 TestStartStop/group/old-k8s-version/serial/FirstStart 300.4
339 TestStartStop/group/embed-certs/serial/Stop 139.15
343 TestStartStop/group/no-preload/serial/Stop 138.97
345 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.06
346 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
348 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
349 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 106.58
350 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
356 TestStartStop/group/old-k8s-version/serial/SecondStart 723.15
357 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.14
358 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.26
359 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.13
360 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.36
361 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 477.04
362 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 368.34
363 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 384.18
364 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 126.97
TestAddons/parallel/Registry (73.87s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.805032ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-6qxxb" [e9ac504f-2687-4fc9-bc82-285fcdbd1c77] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003966047s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dmz6w" [61812c3a-2248-430b-97e8-3b188671e0eb] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003432534s
addons_test.go:342: (dbg) Run:  kubectl --context addons-306463 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-306463 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-306463 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.09134203s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-306463 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-306463 ip
2024/09/10 17:40:48 [DEBUG] GET http://192.168.39.144:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-306463 addons disable registry --alsologtostderr -v=1
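The failing step is the in-cluster connectivity probe: the test expects a throwaway busybox pod to reach the registry Service at its cluster DNS name and get "HTTP/1.1 200" back, but the command exited non-zero after about a minute with "timed out waiting for the condition". A minimal manual reproduction of that probe, assuming the addons-306463 profile is still running (the pod name registry-check is only an illustrative choice), would be:

	# Confirm the Service exists in kube-system
	kubectl --context addons-306463 -n kube-system get svc registry

	# Re-run the same in-cluster probe the test performs
	kubectl --context addons-306463 run registry-check --rm --restart=Never -it \
	  --image=gcr.io/k8s-minikube/busybox -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

	# Host-side check against the node IP, mirroring the GET in the log above
	curl -sS http://192.168.39.144:5000/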
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-306463 -n addons-306463
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-306463 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-306463 logs -n 25: (1.312372339s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-545922 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | -p download-only-545922                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p download-only-545922                                                                     | download-only-545922 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| start   | -o=json --download-only                                                                     | download-only-355146 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | -p download-only-355146                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p download-only-355146                                                                     | download-only-355146 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p download-only-545922                                                                     | download-only-545922 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p download-only-355146                                                                     | download-only-355146 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-896642 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | binary-mirror-896642                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42249                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-896642                                                                     | binary-mirror-896642 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| addons  | disable dashboard -p                                                                        | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | addons-306463                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | addons-306463                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-306463 --wait=true                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:39 UTC | 10 Sep 24 17:39 UTC |
	|         | addons-306463                                                                               |                      |         |         |                     |                     |
	| addons  | addons-306463 addons disable                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:39 UTC | 10 Sep 24 17:39 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-306463 ssh cat                                                                       | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | /opt/local-path-provisioner/pvc-2be347cf-fef5-49ec-8123-8e3cf02ab859_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-306463 addons disable                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-306463 addons                                                                        | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-306463 addons                                                                        | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-306463 addons disable                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-306463 ip                                                                            | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	| addons  | addons-306463 addons disable                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:29:22
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:29:22.682209   13777 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:29:22.682460   13777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:22.682468   13777 out.go:358] Setting ErrFile to fd 2...
	I0910 17:29:22.682472   13777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:22.682675   13777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:29:22.683208   13777 out.go:352] Setting JSON to false
	I0910 17:29:22.683958   13777 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":715,"bootTime":1725988648,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:29:22.684008   13777 start.go:139] virtualization: kvm guest
	I0910 17:29:22.685971   13777 out.go:177] * [addons-306463] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 17:29:22.687151   13777 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:29:22.687158   13777 notify.go:220] Checking for updates...
	I0910 17:29:22.689304   13777 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:29:22.690364   13777 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:29:22.691502   13777 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:29:22.692665   13777 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 17:29:22.693954   13777 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:29:22.695291   13777 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:29:22.725551   13777 out.go:177] * Using the kvm2 driver based on user configuration
	I0910 17:29:22.726685   13777 start.go:297] selected driver: kvm2
	I0910 17:29:22.726698   13777 start.go:901] validating driver "kvm2" against <nil>
	I0910 17:29:22.726711   13777 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:29:22.727613   13777 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:29:22.727695   13777 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 17:29:22.741833   13777 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 17:29:22.741873   13777 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 17:29:22.742090   13777 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:29:22.742162   13777 cni.go:84] Creating CNI manager for ""
	I0910 17:29:22.742176   13777 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 17:29:22.742187   13777 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 17:29:22.742259   13777 start.go:340] cluster config:
	{Name:addons-306463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-306463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:29:22.742373   13777 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:29:22.744027   13777 out.go:177] * Starting "addons-306463" primary control-plane node in "addons-306463" cluster
	I0910 17:29:22.745131   13777 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:29:22.745164   13777 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0910 17:29:22.745174   13777 cache.go:56] Caching tarball of preloaded images
	I0910 17:29:22.745247   13777 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 17:29:22.745259   13777 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 17:29:22.745636   13777 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/config.json ...
	I0910 17:29:22.745666   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/config.json: {Name:mka38f023b13d99d139d0b4b4731421fa1c9c222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:22.745821   13777 start.go:360] acquireMachinesLock for addons-306463: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 17:29:22.745879   13777 start.go:364] duration metric: took 40.358µs to acquireMachinesLock for "addons-306463"
	I0910 17:29:22.745902   13777 start.go:93] Provisioning new machine with config: &{Name:addons-306463 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-306463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:29:22.745979   13777 start.go:125] createHost starting for "" (driver="kvm2")
	I0910 17:29:22.747590   13777 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0910 17:29:22.747699   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:29:22.747737   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:29:22.761242   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36959
	I0910 17:29:22.761623   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:29:22.762084   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:29:22.762105   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:29:22.762416   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:29:22.762596   13777 main.go:141] libmachine: (addons-306463) Calling .GetMachineName
	I0910 17:29:22.762723   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:22.762855   13777 start.go:159] libmachine.API.Create for "addons-306463" (driver="kvm2")
	I0910 17:29:22.762901   13777 client.go:168] LocalClient.Create starting
	I0910 17:29:22.762931   13777 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem
	I0910 17:29:22.824214   13777 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem
	I0910 17:29:23.021609   13777 main.go:141] libmachine: Running pre-create checks...
	I0910 17:29:23.021632   13777 main.go:141] libmachine: (addons-306463) Calling .PreCreateCheck
	I0910 17:29:23.022141   13777 main.go:141] libmachine: (addons-306463) Calling .GetConfigRaw
	I0910 17:29:23.022504   13777 main.go:141] libmachine: Creating machine...
	I0910 17:29:23.022515   13777 main.go:141] libmachine: (addons-306463) Calling .Create
	I0910 17:29:23.022671   13777 main.go:141] libmachine: (addons-306463) Creating KVM machine...
	I0910 17:29:23.023879   13777 main.go:141] libmachine: (addons-306463) DBG | found existing default KVM network
	I0910 17:29:23.024609   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:23.024461   13799 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0910 17:29:23.024628   13777 main.go:141] libmachine: (addons-306463) DBG | created network xml: 
	I0910 17:29:23.024641   13777 main.go:141] libmachine: (addons-306463) DBG | <network>
	I0910 17:29:23.024649   13777 main.go:141] libmachine: (addons-306463) DBG |   <name>mk-addons-306463</name>
	I0910 17:29:23.024662   13777 main.go:141] libmachine: (addons-306463) DBG |   <dns enable='no'/>
	I0910 17:29:23.024669   13777 main.go:141] libmachine: (addons-306463) DBG |   
	I0910 17:29:23.024682   13777 main.go:141] libmachine: (addons-306463) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0910 17:29:23.024693   13777 main.go:141] libmachine: (addons-306463) DBG |     <dhcp>
	I0910 17:29:23.024763   13777 main.go:141] libmachine: (addons-306463) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0910 17:29:23.024789   13777 main.go:141] libmachine: (addons-306463) DBG |     </dhcp>
	I0910 17:29:23.024803   13777 main.go:141] libmachine: (addons-306463) DBG |   </ip>
	I0910 17:29:23.024817   13777 main.go:141] libmachine: (addons-306463) DBG |   
	I0910 17:29:23.024828   13777 main.go:141] libmachine: (addons-306463) DBG | </network>
	I0910 17:29:23.024838   13777 main.go:141] libmachine: (addons-306463) DBG | 
	I0910 17:29:23.029807   13777 main.go:141] libmachine: (addons-306463) DBG | trying to create private KVM network mk-addons-306463 192.168.39.0/24...
	I0910 17:29:23.091118   13777 main.go:141] libmachine: (addons-306463) DBG | private KVM network mk-addons-306463 192.168.39.0/24 created
	I0910 17:29:23.091150   13777 main.go:141] libmachine: (addons-306463) Setting up store path in /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463 ...
	I0910 17:29:23.091164   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:23.091073   13799 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:29:23.091178   13777 main.go:141] libmachine: (addons-306463) Building disk image from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 17:29:23.091208   13777 main.go:141] libmachine: (addons-306463) Downloading /home/jenkins/minikube-integration/19598-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 17:29:23.339080   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:23.338953   13799 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa...
	I0910 17:29:23.548665   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:23.548540   13799 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/addons-306463.rawdisk...
	I0910 17:29:23.548703   13777 main.go:141] libmachine: (addons-306463) DBG | Writing magic tar header
	I0910 17:29:23.548717   13777 main.go:141] libmachine: (addons-306463) DBG | Writing SSH key tar header
	I0910 17:29:23.548730   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:23.548675   13799 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463 ...
	I0910 17:29:23.548788   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463
	I0910 17:29:23.548813   13777 main.go:141] libmachine: (addons-306463) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463 (perms=drwx------)
	I0910 17:29:23.548826   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines
	I0910 17:29:23.548840   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:29:23.548846   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973
	I0910 17:29:23.548863   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0910 17:29:23.548876   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home/jenkins
	I0910 17:29:23.548888   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home
	I0910 17:29:23.548904   13777 main.go:141] libmachine: (addons-306463) DBG | Skipping /home - not owner
	I0910 17:29:23.548918   13777 main.go:141] libmachine: (addons-306463) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines (perms=drwxr-xr-x)
	I0910 17:29:23.548931   13777 main.go:141] libmachine: (addons-306463) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube (perms=drwxr-xr-x)
	I0910 17:29:23.548942   13777 main.go:141] libmachine: (addons-306463) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973 (perms=drwxrwxr-x)
	I0910 17:29:23.548949   13777 main.go:141] libmachine: (addons-306463) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0910 17:29:23.548957   13777 main.go:141] libmachine: (addons-306463) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0910 17:29:23.548963   13777 main.go:141] libmachine: (addons-306463) Creating domain...
	I0910 17:29:23.549957   13777 main.go:141] libmachine: (addons-306463) define libvirt domain using xml: 
	I0910 17:29:23.549976   13777 main.go:141] libmachine: (addons-306463) <domain type='kvm'>
	I0910 17:29:23.549984   13777 main.go:141] libmachine: (addons-306463)   <name>addons-306463</name>
	I0910 17:29:23.549995   13777 main.go:141] libmachine: (addons-306463)   <memory unit='MiB'>4000</memory>
	I0910 17:29:23.550004   13777 main.go:141] libmachine: (addons-306463)   <vcpu>2</vcpu>
	I0910 17:29:23.550011   13777 main.go:141] libmachine: (addons-306463)   <features>
	I0910 17:29:23.550016   13777 main.go:141] libmachine: (addons-306463)     <acpi/>
	I0910 17:29:23.550023   13777 main.go:141] libmachine: (addons-306463)     <apic/>
	I0910 17:29:23.550027   13777 main.go:141] libmachine: (addons-306463)     <pae/>
	I0910 17:29:23.550031   13777 main.go:141] libmachine: (addons-306463)     
	I0910 17:29:23.550036   13777 main.go:141] libmachine: (addons-306463)   </features>
	I0910 17:29:23.550043   13777 main.go:141] libmachine: (addons-306463)   <cpu mode='host-passthrough'>
	I0910 17:29:23.550050   13777 main.go:141] libmachine: (addons-306463)   
	I0910 17:29:23.550064   13777 main.go:141] libmachine: (addons-306463)   </cpu>
	I0910 17:29:23.550074   13777 main.go:141] libmachine: (addons-306463)   <os>
	I0910 17:29:23.550087   13777 main.go:141] libmachine: (addons-306463)     <type>hvm</type>
	I0910 17:29:23.550095   13777 main.go:141] libmachine: (addons-306463)     <boot dev='cdrom'/>
	I0910 17:29:23.550103   13777 main.go:141] libmachine: (addons-306463)     <boot dev='hd'/>
	I0910 17:29:23.550108   13777 main.go:141] libmachine: (addons-306463)     <bootmenu enable='no'/>
	I0910 17:29:23.550121   13777 main.go:141] libmachine: (addons-306463)   </os>
	I0910 17:29:23.550139   13777 main.go:141] libmachine: (addons-306463)   <devices>
	I0910 17:29:23.550156   13777 main.go:141] libmachine: (addons-306463)     <disk type='file' device='cdrom'>
	I0910 17:29:23.550170   13777 main.go:141] libmachine: (addons-306463)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/boot2docker.iso'/>
	I0910 17:29:23.550179   13777 main.go:141] libmachine: (addons-306463)       <target dev='hdc' bus='scsi'/>
	I0910 17:29:23.550185   13777 main.go:141] libmachine: (addons-306463)       <readonly/>
	I0910 17:29:23.550191   13777 main.go:141] libmachine: (addons-306463)     </disk>
	I0910 17:29:23.550198   13777 main.go:141] libmachine: (addons-306463)     <disk type='file' device='disk'>
	I0910 17:29:23.550206   13777 main.go:141] libmachine: (addons-306463)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0910 17:29:23.550221   13777 main.go:141] libmachine: (addons-306463)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/addons-306463.rawdisk'/>
	I0910 17:29:23.550239   13777 main.go:141] libmachine: (addons-306463)       <target dev='hda' bus='virtio'/>
	I0910 17:29:23.550246   13777 main.go:141] libmachine: (addons-306463)     </disk>
	I0910 17:29:23.550252   13777 main.go:141] libmachine: (addons-306463)     <interface type='network'>
	I0910 17:29:23.550256   13777 main.go:141] libmachine: (addons-306463)       <source network='mk-addons-306463'/>
	I0910 17:29:23.550262   13777 main.go:141] libmachine: (addons-306463)       <model type='virtio'/>
	I0910 17:29:23.550268   13777 main.go:141] libmachine: (addons-306463)     </interface>
	I0910 17:29:23.550274   13777 main.go:141] libmachine: (addons-306463)     <interface type='network'>
	I0910 17:29:23.550285   13777 main.go:141] libmachine: (addons-306463)       <source network='default'/>
	I0910 17:29:23.550301   13777 main.go:141] libmachine: (addons-306463)       <model type='virtio'/>
	I0910 17:29:23.550316   13777 main.go:141] libmachine: (addons-306463)     </interface>
	I0910 17:29:23.550326   13777 main.go:141] libmachine: (addons-306463)     <serial type='pty'>
	I0910 17:29:23.550334   13777 main.go:141] libmachine: (addons-306463)       <target port='0'/>
	I0910 17:29:23.550339   13777 main.go:141] libmachine: (addons-306463)     </serial>
	I0910 17:29:23.550346   13777 main.go:141] libmachine: (addons-306463)     <console type='pty'>
	I0910 17:29:23.550352   13777 main.go:141] libmachine: (addons-306463)       <target type='serial' port='0'/>
	I0910 17:29:23.550358   13777 main.go:141] libmachine: (addons-306463)     </console>
	I0910 17:29:23.550364   13777 main.go:141] libmachine: (addons-306463)     <rng model='virtio'>
	I0910 17:29:23.550371   13777 main.go:141] libmachine: (addons-306463)       <backend model='random'>/dev/random</backend>
	I0910 17:29:23.550377   13777 main.go:141] libmachine: (addons-306463)     </rng>
	I0910 17:29:23.550386   13777 main.go:141] libmachine: (addons-306463)     
	I0910 17:29:23.550422   13777 main.go:141] libmachine: (addons-306463)     
	I0910 17:29:23.550446   13777 main.go:141] libmachine: (addons-306463)   </devices>
	I0910 17:29:23.550457   13777 main.go:141] libmachine: (addons-306463) </domain>
	I0910 17:29:23.550464   13777 main.go:141] libmachine: (addons-306463) 
	I0910 17:29:23.555556   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:8a:bf:af in network default
	I0910 17:29:23.556041   13777 main.go:141] libmachine: (addons-306463) Ensuring networks are active...
	I0910 17:29:23.556059   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:23.556675   13777 main.go:141] libmachine: (addons-306463) Ensuring network default is active
	I0910 17:29:23.556973   13777 main.go:141] libmachine: (addons-306463) Ensuring network mk-addons-306463 is active
	I0910 17:29:23.557522   13777 main.go:141] libmachine: (addons-306463) Getting domain xml...
	I0910 17:29:23.558190   13777 main.go:141] libmachine: (addons-306463) Creating domain...
	I0910 17:29:24.925718   13777 main.go:141] libmachine: (addons-306463) Waiting to get IP...
	I0910 17:29:24.926478   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:24.926843   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:24.926877   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:24.926829   13799 retry.go:31] will retry after 244.328706ms: waiting for machine to come up
	I0910 17:29:25.173225   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:25.173645   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:25.173677   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:25.173618   13799 retry.go:31] will retry after 349.863232ms: waiting for machine to come up
	I0910 17:29:25.525116   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:25.525527   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:25.525551   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:25.525492   13799 retry.go:31] will retry after 354.701071ms: waiting for machine to come up
	I0910 17:29:25.881916   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:25.882328   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:25.882350   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:25.882291   13799 retry.go:31] will retry after 411.881959ms: waiting for machine to come up
	I0910 17:29:26.296034   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:26.296469   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:26.296495   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:26.296414   13799 retry.go:31] will retry after 565.67781ms: waiting for machine to come up
	I0910 17:29:26.864221   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:26.864646   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:26.864669   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:26.864638   13799 retry.go:31] will retry after 573.622911ms: waiting for machine to come up
	I0910 17:29:27.439318   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:27.439758   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:27.439778   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:27.439737   13799 retry.go:31] will retry after 813.476344ms: waiting for machine to come up
	I0910 17:29:28.254405   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:28.254862   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:28.254883   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:28.254830   13799 retry.go:31] will retry after 1.15953408s: waiting for machine to come up
	I0910 17:29:29.416144   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:29.416582   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:29.416605   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:29.416548   13799 retry.go:31] will retry after 1.708147643s: waiting for machine to come up
	I0910 17:29:31.127436   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:31.127806   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:31.127832   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:31.127765   13799 retry.go:31] will retry after 2.290831953s: waiting for machine to come up
	I0910 17:29:33.419747   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:33.420078   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:33.420121   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:33.420025   13799 retry.go:31] will retry after 2.583428608s: waiting for machine to come up
	I0910 17:29:36.006176   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:36.006651   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:36.006676   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:36.006622   13799 retry.go:31] will retry after 2.503171234s: waiting for machine to come up
	I0910 17:29:38.511747   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:38.512087   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:38.512126   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:38.512062   13799 retry.go:31] will retry after 3.047981844s: waiting for machine to come up
	I0910 17:29:41.561167   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:41.561635   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:41.561661   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:41.561592   13799 retry.go:31] will retry after 5.416767796s: waiting for machine to come up
	I0910 17:29:46.982824   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:46.983201   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has current primary IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:46.983221   13777 main.go:141] libmachine: (addons-306463) Found IP for machine: 192.168.39.144
	I0910 17:29:46.983236   13777 main.go:141] libmachine: (addons-306463) Reserving static IP address...
	I0910 17:29:46.983568   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find host DHCP lease matching {name: "addons-306463", mac: "52:54:00:74:46:16", ip: "192.168.39.144"} in network mk-addons-306463
	I0910 17:29:47.052549   13777 main.go:141] libmachine: (addons-306463) DBG | Getting to WaitForSSH function...
	I0910 17:29:47.052583   13777 main.go:141] libmachine: (addons-306463) Reserved static IP address: 192.168.39.144
	I0910 17:29:47.052599   13777 main.go:141] libmachine: (addons-306463) Waiting for SSH to be available...
	I0910 17:29:47.055206   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.055721   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:minikube Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.055749   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.055768   13777 main.go:141] libmachine: (addons-306463) DBG | Using SSH client type: external
	I0910 17:29:47.055784   13777 main.go:141] libmachine: (addons-306463) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa (-rw-------)
	I0910 17:29:47.055817   13777 main.go:141] libmachine: (addons-306463) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 17:29:47.055833   13777 main.go:141] libmachine: (addons-306463) DBG | About to run SSH command:
	I0910 17:29:47.055847   13777 main.go:141] libmachine: (addons-306463) DBG | exit 0
	I0910 17:29:47.189212   13777 main.go:141] libmachine: (addons-306463) DBG | SSH cmd err, output: <nil>: 
	I0910 17:29:47.189498   13777 main.go:141] libmachine: (addons-306463) KVM machine creation complete!
	I0910 17:29:47.189774   13777 main.go:141] libmachine: (addons-306463) Calling .GetConfigRaw
	I0910 17:29:47.190322   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:47.190546   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:47.190703   13777 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0910 17:29:47.190718   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:29:47.191953   13777 main.go:141] libmachine: Detecting operating system of created instance...
	I0910 17:29:47.191983   13777 main.go:141] libmachine: Waiting for SSH to be available...
	I0910 17:29:47.191990   13777 main.go:141] libmachine: Getting to WaitForSSH function...
	I0910 17:29:47.192000   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.194176   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.194550   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.194580   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.194727   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:47.194890   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.195040   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.195167   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:47.195310   13777 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:47.195466   13777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0910 17:29:47.195475   13777 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0910 17:29:47.296268   13777 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:29:47.296287   13777 main.go:141] libmachine: Detecting the provisioner...
	I0910 17:29:47.296294   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.298863   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.299207   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.299231   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.299390   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:47.299581   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.299710   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.299846   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:47.300038   13777 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:47.300248   13777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0910 17:29:47.300264   13777 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0910 17:29:47.401977   13777 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0910 17:29:47.402066   13777 main.go:141] libmachine: found compatible host: buildroot
	I0910 17:29:47.402080   13777 main.go:141] libmachine: Provisioning with buildroot...
	I0910 17:29:47.402093   13777 main.go:141] libmachine: (addons-306463) Calling .GetMachineName
	I0910 17:29:47.402339   13777 buildroot.go:166] provisioning hostname "addons-306463"
	I0910 17:29:47.402369   13777 main.go:141] libmachine: (addons-306463) Calling .GetMachineName
	I0910 17:29:47.402589   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.404883   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.405227   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.405262   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.405351   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:47.405496   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.405637   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.405765   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:47.406035   13777 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:47.406187   13777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0910 17:29:47.406198   13777 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-306463 && echo "addons-306463" | sudo tee /etc/hostname
	I0910 17:29:47.519126   13777 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-306463
	
	I0910 17:29:47.519148   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.521835   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.522126   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.522165   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.522331   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:47.522503   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.522688   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.522820   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:47.522981   13777 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:47.523132   13777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0910 17:29:47.523148   13777 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-306463' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-306463/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-306463' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 17:29:47.634728   13777 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:29:47.634773   13777 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 17:29:47.634798   13777 buildroot.go:174] setting up certificates
	I0910 17:29:47.634811   13777 provision.go:84] configureAuth start
	I0910 17:29:47.634820   13777 main.go:141] libmachine: (addons-306463) Calling .GetMachineName
	I0910 17:29:47.635082   13777 main.go:141] libmachine: (addons-306463) Calling .GetIP
	I0910 17:29:47.637636   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.638056   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.638081   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.638266   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.640398   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.640703   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.640732   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.640867   13777 provision.go:143] copyHostCerts
	I0910 17:29:47.640932   13777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 17:29:47.641095   13777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 17:29:47.641166   13777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 17:29:47.641219   13777 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.addons-306463 san=[127.0.0.1 192.168.39.144 addons-306463 localhost minikube]
	I0910 17:29:47.725425   13777 provision.go:177] copyRemoteCerts
	I0910 17:29:47.725479   13777 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 17:29:47.725499   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.728270   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.728605   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.728635   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.728841   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:47.729028   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.729224   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:47.729412   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:29:47.812673   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 17:29:47.838502   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0910 17:29:47.861372   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 17:29:47.884280   13777 provision.go:87] duration metric: took 249.455962ms to configureAuth
	I0910 17:29:47.884302   13777 buildroot.go:189] setting minikube options for container-runtime
	I0910 17:29:47.884440   13777 config.go:182] Loaded profile config "addons-306463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:29:47.884509   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.887000   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.887356   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.887385   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.887546   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:47.887712   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.887871   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.888039   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:47.888187   13777 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:47.888352   13777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0910 17:29:47.888365   13777 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 17:29:48.228474   13777 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 17:29:48.228497   13777 main.go:141] libmachine: Checking connection to Docker...
	I0910 17:29:48.228507   13777 main.go:141] libmachine: (addons-306463) Calling .GetURL
	I0910 17:29:48.229870   13777 main.go:141] libmachine: (addons-306463) DBG | Using libvirt version 6000000
	I0910 17:29:48.232480   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.232820   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.232841   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.233000   13777 main.go:141] libmachine: Docker is up and running!
	I0910 17:29:48.233010   13777 main.go:141] libmachine: Reticulating splines...
	I0910 17:29:48.233016   13777 client.go:171] duration metric: took 25.470105424s to LocalClient.Create
	I0910 17:29:48.233036   13777 start.go:167] duration metric: took 25.470181661s to libmachine.API.Create "addons-306463"
	I0910 17:29:48.233049   13777 start.go:293] postStartSetup for "addons-306463" (driver="kvm2")
	I0910 17:29:48.233063   13777 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 17:29:48.233098   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:48.233339   13777 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 17:29:48.233365   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:48.235691   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.236027   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.236056   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.236234   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:48.236415   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:48.236578   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:48.236717   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:29:48.314956   13777 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 17:29:48.319200   13777 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 17:29:48.319217   13777 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 17:29:48.319286   13777 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 17:29:48.319313   13777 start.go:296] duration metric: took 86.256331ms for postStartSetup
	I0910 17:29:48.319357   13777 main.go:141] libmachine: (addons-306463) Calling .GetConfigRaw
	I0910 17:29:48.319875   13777 main.go:141] libmachine: (addons-306463) Calling .GetIP
	I0910 17:29:48.322245   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.322628   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.322656   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.322871   13777 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/config.json ...
	I0910 17:29:48.323037   13777 start.go:128] duration metric: took 25.577048673s to createHost
	I0910 17:29:48.323063   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:48.325320   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.325645   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.325671   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.325773   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:48.325947   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:48.326098   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:48.326209   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:48.326331   13777 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:48.326533   13777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0910 17:29:48.326545   13777 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 17:29:48.425744   13777 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725989388.402057522
	
	I0910 17:29:48.425768   13777 fix.go:216] guest clock: 1725989388.402057522
	I0910 17:29:48.425778   13777 fix.go:229] Guest: 2024-09-10 17:29:48.402057522 +0000 UTC Remote: 2024-09-10 17:29:48.323049297 +0000 UTC m=+25.672610756 (delta=79.008225ms)
	I0910 17:29:48.425835   13777 fix.go:200] guest clock delta is within tolerance: 79.008225ms
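The fix.go lines above take the guest's `date +%s.%N` output, compare it against the host clock and accept the 79ms delta. A minimal Go sketch of that comparison follows; the 2-second tolerance is an assumption, since the log does not state minikube's actual threshold.

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        // Guest time as reported by `date +%s.%N` in the log above.
        guestOut := "1725989388.402057522"
        sec, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(sec*float64(time.Second))) // float64 loses ns precision; fine for a skew check

        delta := guest.Sub(time.Now())
        if delta < 0 {
            delta = -delta
        }

        // Assumed threshold for illustration; the log only states the 79ms delta was "within tolerance".
        const tolerance = 2 * time.Second
        if delta > tolerance {
            fmt.Printf("clock skew %s exceeds %s; the guest clock would need a resync\n", delta, tolerance)
        } else {
            fmt.Printf("guest clock delta %s is within tolerance\n", delta)
        }
    }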
	I0910 17:29:48.425843   13777 start.go:83] releasing machines lock for "addons-306463", held for 25.679951591s
	I0910 17:29:48.425876   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:48.426150   13777 main.go:141] libmachine: (addons-306463) Calling .GetIP
	I0910 17:29:48.428633   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.428887   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.428917   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.429038   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:48.429469   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:48.429618   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:48.429702   13777 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 17:29:48.429752   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:48.429808   13777 ssh_runner.go:195] Run: cat /version.json
	I0910 17:29:48.429830   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:48.432215   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.432477   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.432509   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.432533   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.432629   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:48.432809   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:48.432852   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.432885   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.432948   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:48.433024   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:48.433123   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:29:48.433223   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:48.433357   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:48.433529   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:29:48.519560   13777 ssh_runner.go:195] Run: systemctl --version
	I0910 17:29:48.543890   13777 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 17:29:48.713886   13777 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 17:29:48.719987   13777 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 17:29:48.720039   13777 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 17:29:48.736004   13777 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 17:29:48.736022   13777 start.go:495] detecting cgroup driver to use...
	I0910 17:29:48.736067   13777 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 17:29:48.752773   13777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 17:29:48.766717   13777 docker.go:217] disabling cri-docker service (if available) ...
	I0910 17:29:48.766772   13777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 17:29:48.780643   13777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 17:29:48.794503   13777 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 17:29:48.918085   13777 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 17:29:49.086620   13777 docker.go:233] disabling docker service ...
	I0910 17:29:49.086682   13777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 17:29:49.100274   13777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 17:29:49.112877   13777 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 17:29:49.235428   13777 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 17:29:49.349493   13777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 17:29:49.363676   13777 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 17:29:49.381290   13777 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 17:29:49.381345   13777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.391264   13777 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 17:29:49.391322   13777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.401028   13777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.410592   13777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.420351   13777 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 17:29:49.430171   13777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.439789   13777 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.455759   13777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.465551   13777 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 17:29:49.474306   13777 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 17:29:49.474354   13777 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 17:29:49.487232   13777 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
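Because /proc/sys/net/bridge does not exist until the bridge module is loaded, the sysctl probe above fails with status 255 and minikube falls back to modprobe plus enabling IPv4 forwarding. A rough Go sketch of that fallback, mirroring the logged commands with simplified error handling:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // If the bridge-netfilter sysctl cannot be read, load br_netfilter first.
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            fmt.Println("bridge netfilter sysctl unavailable, loading br_netfilter:", err)
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                fmt.Println("modprobe br_netfilter failed:", err)
            }
        }
        // Redirection must happen in a root shell, hence the sh -c wrapper.
        if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
            fmt.Println("enabling net.ipv4.ip_forward failed:", err)
        }
    }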
	I0910 17:29:49.496150   13777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:29:49.606336   13777 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 17:29:49.695242   13777 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 17:29:49.695340   13777 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 17:29:49.699902   13777 start.go:563] Will wait 60s for crictl version
	I0910 17:29:49.699961   13777 ssh_runner.go:195] Run: which crictl
	I0910 17:29:49.703479   13777 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 17:29:49.744817   13777 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 17:29:49.744937   13777 ssh_runner.go:195] Run: crio --version
	I0910 17:29:49.773082   13777 ssh_runner.go:195] Run: crio --version
	I0910 17:29:49.804181   13777 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 17:29:49.805563   13777 main.go:141] libmachine: (addons-306463) Calling .GetIP
	I0910 17:29:49.808022   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:49.808405   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:49.808439   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:49.808624   13777 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 17:29:49.812736   13777 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:29:49.825102   13777 kubeadm.go:883] updating cluster {Name:addons-306463 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:addons-306463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 17:29:49.825212   13777 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:29:49.825256   13777 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 17:29:49.856852   13777 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 17:29:49.856923   13777 ssh_runner.go:195] Run: which lz4
	I0910 17:29:49.860976   13777 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 17:29:49.865045   13777 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 17:29:49.865078   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 17:29:51.093518   13777 crio.go:462] duration metric: took 1.232563952s to copy over tarball
	I0910 17:29:51.093585   13777 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 17:29:53.221638   13777 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.128025242s)
	I0910 17:29:53.221664   13777 crio.go:469] duration metric: took 2.128123943s to extract the tarball
	I0910 17:29:53.221671   13777 ssh_runner.go:146] rm: /preloaded.tar.lz4
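The preload step above stats /preloaded.tar.lz4 on the guest, copies the roughly 389 MB tarball when it is missing, and unpacks it into /var before deleting it. A simplified Go sketch of the same flow, assuming the stock ssh and scp binaries in place of minikube's internal ssh_runner; paths and the guest address are taken from this log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        key := "/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa"
        guest := "docker@192.168.39.144"
        tarball := "/home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4"

        // Existence check, mirroring the logged `stat` of /preloaded.tar.lz4.
        if err := exec.Command("ssh", "-i", key, guest, "stat /preloaded.tar.lz4").Run(); err != nil {
            // Not there yet: copy the ~389 MB tarball over.
            if err := exec.Command("scp", "-i", key, tarball, guest+":/preloaded.tar.lz4").Run(); err != nil {
                fmt.Println("copying preload tarball failed:", err)
                return
            }
        }
        // Extract into /var and clean up, as in the logged commands.
        remoteCmd := "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4"
        if err := exec.Command("ssh", "-i", key, guest, remoteCmd).Run(); err != nil {
            fmt.Println("extracting preload tarball failed:", err)
        }
    }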
	I0910 17:29:53.258544   13777 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 17:29:53.300100   13777 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 17:29:53.300128   13777 cache_images.go:84] Images are preloaded, skipping loading
	I0910 17:29:53.300138   13777 kubeadm.go:934] updating node { 192.168.39.144 8443 v1.31.0 crio true true} ...
	I0910 17:29:53.300253   13777 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-306463 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-306463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 17:29:53.300317   13777 ssh_runner.go:195] Run: crio config
	I0910 17:29:53.353856   13777 cni.go:84] Creating CNI manager for ""
	I0910 17:29:53.353875   13777 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 17:29:53.353885   13777 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 17:29:53.353905   13777 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.144 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-306463 NodeName:addons-306463 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 17:29:53.354032   13777 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-306463"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 17:29:53.354084   13777 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 17:29:53.364093   13777 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 17:29:53.364159   13777 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 17:29:53.373663   13777 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0910 17:29:53.391325   13777 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 17:29:53.408601   13777 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
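The kubeadm.yaml.new just written is the config dumped above, rendered from the cluster options. A minimal Go sketch of producing such a fragment with text/template; the struct and its field names are hypothetical, and only a slice of the real file is covered:

    package main

    import (
        "os"
        "text/template"
    )

    // Hypothetical options struct; field names are not minikube's.
    type kubeadmOpts struct {
        NodeName  string
        NodeIP    string
        PodSubnet string
        K8sVer    string
    }

    // Only a fragment of the full config shown in the log.
    const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: 8443
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    ---
    kubernetesVersion: {{.K8sVer}}
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
        opts := kubeadmOpts{
            NodeName:  "addons-306463",
            NodeIP:    "192.168.39.144",
            PodSubnet: "10.244.0.0/16",
            K8sVer:    "v1.31.0",
        }
        tmpl := template.Must(template.New("kubeadm").Parse(fragment))
        if err := tmpl.Execute(os.Stdout, opts); err != nil {
            panic(err)
        }
    }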
	I0910 17:29:53.428267   13777 ssh_runner.go:195] Run: grep 192.168.39.144	control-plane.minikube.internal$ /etc/hosts
	I0910 17:29:53.432004   13777 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:29:53.443494   13777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:29:53.565386   13777 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:29:53.582101   13777 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463 for IP: 192.168.39.144
	I0910 17:29:53.582140   13777 certs.go:194] generating shared ca certs ...
	I0910 17:29:53.582161   13777 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:53.582320   13777 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 17:29:53.851863   13777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt ...
	I0910 17:29:53.851887   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt: {Name:mk391b947a0b07d47c3f48605c2169ac6bbd02dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:53.852030   13777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key ...
	I0910 17:29:53.852040   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key: {Name:mke85b1ed3e4a8e9bbc933ab9200470c82fbf9f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:53.852110   13777 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 17:29:54.025549   13777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt ...
	I0910 17:29:54.025576   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt: {Name:mkba6d1cf3fb11e6bd8f0b60294ec684bf33d7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.025720   13777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key ...
	I0910 17:29:54.025730   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key: {Name:mke1e40be102cd0ea85ebf8e9804fe7294de9b3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.025806   13777 certs.go:256] generating profile certs ...
	I0910 17:29:54.025854   13777 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.key
	I0910 17:29:54.025873   13777 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt with IP's: []
	I0910 17:29:54.256975   13777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt ...
	I0910 17:29:54.257001   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: {Name:mkddd504fb642c11276cd07fd6115fe4786a05eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.257158   13777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.key ...
	I0910 17:29:54.257169   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.key: {Name:mkd6342dd54701d46a2aa87d79fc772b251c8012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.257264   13777 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.key.8f6ba92e
	I0910 17:29:54.257283   13777 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.crt.8f6ba92e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.144]
	I0910 17:29:54.390720   13777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.crt.8f6ba92e ...
	I0910 17:29:54.390752   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.crt.8f6ba92e: {Name:mkef82fca0b89b824a8a6247fbc2d43a96f4692c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.390921   13777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.key.8f6ba92e ...
	I0910 17:29:54.390940   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.key.8f6ba92e: {Name:mk548882b9e102cf63bf5a2676b5044c14781eaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.391030   13777 certs.go:381] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.crt.8f6ba92e -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.crt
	I0910 17:29:54.391118   13777 certs.go:385] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.key.8f6ba92e -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.key
	I0910 17:29:54.391182   13777 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.key
	I0910 17:29:54.391204   13777 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.crt with IP's: []
	I0910 17:29:54.752265   13777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.crt ...
	I0910 17:29:54.752292   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.crt: {Name:mkc361744979bc8404f5a5aaa8788af34523a213 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.752452   13777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.key ...
	I0910 17:29:54.752468   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.key: {Name:mkcded4c85166d07f3f2b1b8ff068b03a9d76311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
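The certs.go/crypto.go lines above generate a shared CA and then CA-signed profile certificates whose apiserver SANs are 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.144. A self-contained Go sketch of issuing one such SAN certificate with the standard library; common names, lifetimes and key sizes here are illustrative rather than minikube's actual values.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // CA key/cert (stands in for minikubeCA); errors ignored for brevity.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(3, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf cert with the IP SANs the log reports for the apiserver certificate.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"}, // illustrative CN
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.144"),
            },
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }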
	I0910 17:29:54.752681   13777 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 17:29:54.752717   13777 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 17:29:54.752753   13777 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 17:29:54.752785   13777 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 17:29:54.753440   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 17:29:54.779118   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 17:29:54.803026   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 17:29:54.825435   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 17:29:54.848031   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0910 17:29:54.872008   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 17:29:54.897479   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 17:29:54.922879   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 17:29:54.947831   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 17:29:54.974722   13777 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 17:29:54.994110   13777 ssh_runner.go:195] Run: openssl version
	I0910 17:29:55.000395   13777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 17:29:55.013767   13777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:29:55.018473   13777 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:29:55.018531   13777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:29:55.024792   13777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
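The commands above link the minikube CA into the system trust store and create the /etc/ssl/certs/b5213941.0 symlink named after its OpenSSL subject hash. A small Go sketch of that step, shelling out to openssl so the hash matches what the log shows; it assumes root privileges and omits the `test -s`/`test -L` guards:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        const caPem = "/usr/share/ca-certificates/minikubeCA.pem"

        // `openssl x509 -hash -noout -in ...` prints the subject hash (b5213941 in this run).
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPem).Output()
        if err != nil {
            fmt.Println("hashing CA failed:", err)
            return
        }
        hash := strings.TrimSpace(string(out))

        // Create the <hash>.0 symlink so system TLS clients trust the CA.
        link := "/etc/ssl/certs/" + hash + ".0"
        if err := exec.Command("ln", "-fs", "/etc/ssl/certs/minikubeCA.pem", link).Run(); err != nil {
            fmt.Println("creating symlink failed:", err)
            return
        }
        fmt.Println("trusted CA via", link)
    }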
	I0910 17:29:55.035682   13777 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 17:29:55.039752   13777 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 17:29:55.039807   13777 kubeadm.go:392] StartCluster: {Name:addons-306463 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 C
lusterName:addons-306463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:29:55.039892   13777 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 17:29:55.039955   13777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 17:29:55.094283   13777 cri.go:89] found id: ""
	I0910 17:29:55.094342   13777 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 17:29:55.112402   13777 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 17:29:55.123314   13777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 17:29:55.135689   13777 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 17:29:55.135707   13777 kubeadm.go:157] found existing configuration files:
	
	I0910 17:29:55.135753   13777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 17:29:55.144757   13777 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 17:29:55.144811   13777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 17:29:55.154051   13777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 17:29:55.162743   13777 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 17:29:55.162794   13777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 17:29:55.171799   13777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 17:29:55.180529   13777 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 17:29:55.180583   13777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 17:29:55.191873   13777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 17:29:55.200886   13777 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 17:29:55.200937   13777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 17:29:55.210181   13777 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 17:29:55.258814   13777 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 17:29:55.258968   13777 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 17:29:55.371415   13777 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 17:29:55.371545   13777 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 17:29:55.371669   13777 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 17:29:55.384083   13777 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 17:29:55.408465   13777 out.go:235]   - Generating certificates and keys ...
	I0910 17:29:55.408589   13777 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 17:29:55.408665   13777 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 17:29:55.897673   13777 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 17:29:56.059223   13777 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0910 17:29:56.278032   13777 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0910 17:29:56.441145   13777 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0910 17:29:56.605793   13777 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0910 17:29:56.605947   13777 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-306463 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	I0910 17:29:56.790976   13777 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0910 17:29:56.791214   13777 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-306463 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	I0910 17:29:56.836139   13777 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 17:29:57.046320   13777 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 17:29:57.222692   13777 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0910 17:29:57.222801   13777 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 17:29:57.462021   13777 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 17:29:57.829972   13777 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 17:29:57.954467   13777 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 17:29:58.166081   13777 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 17:29:58.224456   13777 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 17:29:58.224997   13777 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 17:29:58.227323   13777 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 17:29:58.229164   13777 out.go:235]   - Booting up control plane ...
	I0910 17:29:58.229261   13777 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 17:29:58.229329   13777 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 17:29:58.229426   13777 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 17:29:58.245412   13777 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 17:29:58.251271   13777 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 17:29:58.251364   13777 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 17:29:58.388887   13777 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 17:29:58.389039   13777 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 17:29:58.890585   13777 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.078984ms
	I0910 17:29:58.890687   13777 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 17:30:03.392681   13777 kubeadm.go:310] [api-check] The API server is healthy after 4.502932782s
	I0910 17:30:03.406115   13777 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 17:30:03.420124   13777 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 17:30:03.449395   13777 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 17:30:03.449667   13777 kubeadm.go:310] [mark-control-plane] Marking the node addons-306463 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 17:30:03.460309   13777 kubeadm.go:310] [bootstrap-token] Using token: 457t84.d2zxow5i3fyaif8g
	I0910 17:30:03.461609   13777 out.go:235]   - Configuring RBAC rules ...
	I0910 17:30:03.461716   13777 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 17:30:03.465462   13777 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 17:30:03.474356   13777 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 17:30:03.477241   13777 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 17:30:03.483988   13777 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 17:30:03.489715   13777 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 17:30:03.799075   13777 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 17:30:04.227910   13777 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 17:30:04.798072   13777 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 17:30:04.798097   13777 kubeadm.go:310] 
	I0910 17:30:04.798189   13777 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 17:30:04.798211   13777 kubeadm.go:310] 
	I0910 17:30:04.798306   13777 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 17:30:04.798317   13777 kubeadm.go:310] 
	I0910 17:30:04.798366   13777 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 17:30:04.798449   13777 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 17:30:04.798534   13777 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 17:30:04.798547   13777 kubeadm.go:310] 
	I0910 17:30:04.798615   13777 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 17:30:04.798626   13777 kubeadm.go:310] 
	I0910 17:30:04.798664   13777 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 17:30:04.798671   13777 kubeadm.go:310] 
	I0910 17:30:04.798731   13777 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 17:30:04.798795   13777 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 17:30:04.798868   13777 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 17:30:04.798878   13777 kubeadm.go:310] 
	I0910 17:30:04.798966   13777 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 17:30:04.799060   13777 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 17:30:04.799070   13777 kubeadm.go:310] 
	I0910 17:30:04.799182   13777 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 457t84.d2zxow5i3fyaif8g \
	I0910 17:30:04.799300   13777 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 \
	I0910 17:30:04.799341   13777 kubeadm.go:310] 	--control-plane 
	I0910 17:30:04.799355   13777 kubeadm.go:310] 
	I0910 17:30:04.799468   13777 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 17:30:04.799478   13777 kubeadm.go:310] 
	I0910 17:30:04.799599   13777 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 457t84.d2zxow5i3fyaif8g \
	I0910 17:30:04.799726   13777 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 
	I0910 17:30:04.800658   13777 kubeadm.go:310] W0910 17:29:55.239705     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 17:30:04.800920   13777 kubeadm.go:310] W0910 17:29:55.240584     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 17:30:04.801008   13777 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 17:30:04.801028   13777 cni.go:84] Creating CNI manager for ""
	I0910 17:30:04.801040   13777 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 17:30:04.802881   13777 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 17:30:04.804227   13777 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 17:30:04.816674   13777 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 17:30:04.835609   13777 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 17:30:04.835737   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:04.835739   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-306463 minikube.k8s.io/updated_at=2024_09_10T17_30_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=addons-306463 minikube.k8s.io/primary=true
	I0910 17:30:04.865385   13777 ops.go:34] apiserver oom_adj: -16
	I0910 17:30:04.960966   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:05.461285   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:05.961804   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:06.461686   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:06.961554   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:07.461362   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:07.961164   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:08.461339   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:08.961327   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:09.461036   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:09.564661   13777 kubeadm.go:1113] duration metric: took 4.728972481s to wait for elevateKubeSystemPrivileges
	I0910 17:30:09.564692   13777 kubeadm.go:394] duration metric: took 14.524892016s to StartCluster
	I0910 17:30:09.564710   13777 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:09.564844   13777 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:30:09.565243   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:09.565462   13777 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:30:09.565495   13777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0910 17:30:09.565538   13777 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0910 17:30:09.565627   13777 addons.go:69] Setting cloud-spanner=true in profile "addons-306463"
	I0910 17:30:09.565651   13777 addons.go:69] Setting yakd=true in profile "addons-306463"
	I0910 17:30:09.565662   13777 addons.go:234] Setting addon cloud-spanner=true in "addons-306463"
	I0910 17:30:09.565655   13777 addons.go:69] Setting inspektor-gadget=true in profile "addons-306463"
	I0910 17:30:09.565675   13777 addons.go:234] Setting addon yakd=true in "addons-306463"
	I0910 17:30:09.565670   13777 addons.go:69] Setting gcp-auth=true in profile "addons-306463"
	I0910 17:30:09.565685   13777 addons.go:234] Setting addon inspektor-gadget=true in "addons-306463"
	I0910 17:30:09.565692   13777 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-306463"
	I0910 17:30:09.565703   13777 mustload.go:65] Loading cluster: addons-306463
	I0910 17:30:09.565700   13777 config.go:182] Loaded profile config "addons-306463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:30:09.565711   13777 addons.go:69] Setting metrics-server=true in profile "addons-306463"
	I0910 17:30:09.565715   13777 addons.go:69] Setting helm-tiller=true in profile "addons-306463"
	I0910 17:30:09.565720   13777 addons.go:69] Setting storage-provisioner=true in profile "addons-306463"
	I0910 17:30:09.565734   13777 addons.go:234] Setting addon metrics-server=true in "addons-306463"
	I0910 17:30:09.565738   13777 addons.go:234] Setting addon storage-provisioner=true in "addons-306463"
	I0910 17:30:09.565740   13777 addons.go:69] Setting ingress=true in profile "addons-306463"
	I0910 17:30:09.565740   13777 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-306463"
	I0910 17:30:09.565753   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565754   13777 addons.go:69] Setting volcano=true in profile "addons-306463"
	I0910 17:30:09.565760   13777 addons.go:69] Setting ingress-dns=true in profile "addons-306463"
	I0910 17:30:09.565765   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565765   13777 addons.go:69] Setting registry=true in profile "addons-306463"
	I0910 17:30:09.565776   13777 addons.go:234] Setting addon volcano=true in "addons-306463"
	I0910 17:30:09.565760   13777 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-306463"
	I0910 17:30:09.565783   13777 addons.go:234] Setting addon registry=true in "addons-306463"
	I0910 17:30:09.565793   13777 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-306463"
	I0910 17:30:09.565801   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565809   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565810   13777 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-306463"
	I0910 17:30:09.565834   13777 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-306463"
	I0910 17:30:09.565735   13777 addons.go:234] Setting addon helm-tiller=true in "addons-306463"
	I0910 17:30:09.565889   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565897   13777 config.go:182] Loaded profile config "addons-306463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:30:09.565777   13777 addons.go:234] Setting addon ingress-dns=true in "addons-306463"
	I0910 17:30:09.566180   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566186   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566191   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.566190   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566210   13777 addons.go:69] Setting default-storageclass=true in profile "addons-306463"
	I0910 17:30:09.566212   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566220   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566224   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566226   13777 addons.go:69] Setting volumesnapshots=true in profile "addons-306463"
	I0910 17:30:09.565707   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565756   13777 addons.go:234] Setting addon ingress=true in "addons-306463"
	I0910 17:30:09.566246   13777 addons.go:234] Setting addon volumesnapshots=true in "addons-306463"
	I0910 17:30:09.565705   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565801   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.566276   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.566214   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566431   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.565756   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.566494   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566227   13777 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-306463"
	I0910 17:30:09.565709   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.566515   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566518   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566594   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566617   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566232   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566712   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566737   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566765   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.566781   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566800   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566821   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566831   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566843   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566802   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566880   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566882   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566891   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566902   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566910   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566935   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.567017   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.567048   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.567756   13777 out.go:177] * Verifying Kubernetes components...
	I0910 17:30:09.569434   13777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:30:09.582777   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0910 17:30:09.589426   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.589457   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.589941   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.591066   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.591086   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.593346   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.593990   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.594031   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.614952   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36269
	I0910 17:30:09.615511   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.616077   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.616100   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.625500   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.626139   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.626180   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.626663   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40557
	I0910 17:30:09.627167   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.627742   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.627760   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.628137   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.628731   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.628754   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.628942   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42255
	I0910 17:30:09.629508   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.629998   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.630014   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.630491   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.631027   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.631063   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.631232   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33169
	I0910 17:30:09.631984   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.632597   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.632614   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.633144   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I0910 17:30:09.633568   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.634036   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.634051   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.634409   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.634947   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.634984   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.635276   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.635474   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.639823   13777 addons.go:234] Setting addon default-storageclass=true in "addons-306463"
	I0910 17:30:09.639870   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.640208   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.640228   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.649585   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42783
	I0910 17:30:09.650122   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.650724   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.650742   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.651106   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.651353   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43517
	I0910 17:30:09.651675   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.651705   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.651834   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.652091   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41021
	I0910 17:30:09.652330   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.652346   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.652505   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.653024   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.653041   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.653481   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39633
	I0910 17:30:09.653910   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.654114   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.654913   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32845
	I0910 17:30:09.655435   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.655964   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.655981   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.656044   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.656117   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.656812   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.656832   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.657418   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.657493   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38221
	I0910 17:30:09.657907   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.658557   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.658600   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.658821   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
	I0910 17:30:09.659275   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.659751   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.659768   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.660535   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.660593   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40217
	I0910 17:30:09.661560   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.661593   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.661831   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.661907   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.662410   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.662439   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.662442   13777 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0910 17:30:09.662415   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.662611   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.662676   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32849
	I0910 17:30:09.662687   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.663387   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.663450   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.663526   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.663886   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.664005   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.664015   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.664124   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.664133   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.664307   13777 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0910 17:30:09.664322   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0910 17:30:09.664338   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.664427   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.664960   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.665000   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.665625   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.665808   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.666537   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.666894   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.666927   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.667412   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38989
	I0910 17:30:09.667675   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.668696   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.669275   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.669291   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.669343   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.669546   13777 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0910 17:30:09.670692   13777 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 17:30:09.670708   13777 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 17:30:09.670727   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.670952   13777 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-306463"
	I0910 17:30:09.670991   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.671783   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.671816   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.672717   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.673017   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.673445   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.673492   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.673650   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.673854   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.674003   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.676862   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.676873   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41829
	I0910 17:30:09.676918   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45793
	I0910 17:30:09.676994   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.677003   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.677025   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.677041   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.677261   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.677376   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.677625   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.677718   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.678066   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44557
	I0910 17:30:09.678469   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.678717   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.678737   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.678906   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.678926   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.679232   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.679271   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.679735   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.679770   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.679844   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.679855   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.680043   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.680698   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.681570   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.681611   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.681815   13777 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0910 17:30:09.681916   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.682688   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.682726   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.683190   13777 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 17:30:09.683203   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0910 17:30:09.683218   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.686842   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.687460   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.687482   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.687670   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.687848   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.688024   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.688177   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.694726   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39185
	I0910 17:30:09.695273   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.695643   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36063
	I0910 17:30:09.696099   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.696281   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.696293   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.696679   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.696746   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33919
	I0910 17:30:09.696887   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.698037   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.698762   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.698922   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.698941   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.699119   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.699136   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.699179   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46211
	I0910 17:30:09.699522   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.699585   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.699840   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.700601   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.700644   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.700874   13777 out.go:177]   - Using image docker.io/registry:2.8.3
	I0910 17:30:09.700998   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.701016   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40939
	I0910 17:30:09.701360   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.701612   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41153
	I0910 17:30:09.701832   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.701844   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.702101   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.702118   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.702224   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.702441   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.703052   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.703125   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.703591   13777 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0910 17:30:09.704094   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.704109   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.704260   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0910 17:30:09.704704   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.704740   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.704775   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.705063   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0910 17:30:09.705196   13777 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0910 17:30:09.705211   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0910 17:30:09.705219   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I0910 17:30:09.705226   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.705196   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.705342   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.706377   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.706400   13777 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0910 17:30:09.706411   13777 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0910 17:30:09.706426   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.706440   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.706471   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.706482   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.707075   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.707216   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.707235   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.707300   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.707366   13777 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0910 17:30:09.707624   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.707822   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.708675   13777 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0910 17:30:09.708690   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0910 17:30:09.708705   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.712661   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.713131   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.713163   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.713366   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.713421   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.713480   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.713861   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:09.713873   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:09.713918   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.713956   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.713983   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.714002   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.714031   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.714206   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.714247   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.714468   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:09.714499   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45441
	I0910 17:30:09.714592   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:09.714604   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:09.714613   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:09.714627   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:09.714682   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.714871   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.714961   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.714997   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.715045   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.715064   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.715156   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.715206   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.715419   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.715432   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.715492   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:09.715508   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:09.715557   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	W0910 17:30:09.715586   13777 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0910 17:30:09.715674   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.715712   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.715796   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.716017   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.716559   13777 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:30:09.716638   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0910 17:30:09.717659   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.717965   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.718259   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.719379   13777 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0910 17:30:09.719428   13777 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 17:30:09.719443   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0910 17:30:09.719454   13777 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0910 17:30:09.720905   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34715
	I0910 17:30:09.721013   13777 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0910 17:30:09.721027   13777 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0910 17:30:09.721044   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.721066   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33463
	I0910 17:30:09.721206   13777 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:30:09.721216   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 17:30:09.721229   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.721849   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.722165   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.722359   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.722466   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.722470   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44619
	I0910 17:30:09.722708   13777 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:30:09.722753   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0910 17:30:09.723597   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.723648   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.723855   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.724282   13777 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0910 17:30:09.724307   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0910 17:30:09.724324   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.724525   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.725165   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0910 17:30:09.725201   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.725218   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.725561   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.726077   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.726104   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.726140   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.726215   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.726601   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.726630   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.726642   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.726678   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.726725   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.726825   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.727007   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.727185   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.727319   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.727446   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.727475   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.727554   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0910 17:30:09.727608   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.727780   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.728076   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.728343   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.728947   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.729258   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.729880   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.729952   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.730000   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.730827   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0910 17:30:09.731231   13777 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0910 17:30:09.731583   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40943
	I0910 17:30:09.731692   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.732073   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.732112   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.732762   13777 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0910 17:30:09.732777   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0910 17:30:09.732794   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.733213   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.733241   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.733392   13777 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0910 17:30:09.733608   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.733837   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0910 17:30:09.733864   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.734595   13777 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0910 17:30:09.733877   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.734613   13777 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0910 17:30:09.734632   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.734774   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.736617   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0910 17:30:09.737387   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.737645   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.737692   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0910 17:30:09.737715   13777 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0910 17:30:09.737739   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.737924   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.737974   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.738098   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.738264   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.738435   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.738443   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.738478   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.738597   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.738607   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.738839   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.738982   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.739120   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.740323   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41219
	I0910 17:30:09.740652   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.740693   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.741101   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.741129   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.741227   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.741442   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.741462   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.741464   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.741593   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.741743   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.741743   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.741915   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.743141   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.743345   13777 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 17:30:09.743359   13777 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 17:30:09.743372   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.746708   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.746740   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.746763   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.746782   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.746853   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.746981   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.747118   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	W0910 17:30:09.748150   13777 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56800->192.168.39.144:22: read: connection reset by peer
	I0910 17:30:09.748170   13777 retry.go:31] will retry after 285.141352ms: ssh: handshake failed: read tcp 192.168.39.1:56800->192.168.39.144:22: read: connection reset by peer
	I0910 17:30:09.753685   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38941
	I0910 17:30:09.753988   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.754407   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.754424   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.754715   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.754955   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.756271   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.758237   13777 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0910 17:30:09.759829   13777 out.go:177]   - Using image docker.io/busybox:stable
	I0910 17:30:09.761821   13777 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 17:30:09.761840   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0910 17:30:09.761857   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.764453   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.764819   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.764843   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.764947   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.765134   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.765249   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.765359   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	W0910 17:30:09.765990   13777 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56802->192.168.39.144:22: read: connection reset by peer
	I0910 17:30:09.766007   13777 retry.go:31] will retry after 202.018394ms: ssh: handshake failed: read tcp 192.168.39.1:56802->192.168.39.144:22: read: connection reset by peer
	W0910 17:30:09.969022   13777 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56808->192.168.39.144:22: read: connection reset by peer
	I0910 17:30:09.969051   13777 retry.go:31] will retry after 235.947645ms: ssh: handshake failed: read tcp 192.168.39.1:56808->192.168.39.144:22: read: connection reset by peer
	I0910 17:30:10.094763   13777 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:30:10.094906   13777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0910 17:30:10.122256   13777 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0910 17:30:10.122278   13777 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0910 17:30:10.186667   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0910 17:30:10.191366   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 17:30:10.193981   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0910 17:30:10.193996   13777 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0910 17:30:10.259618   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0910 17:30:10.270667   13777 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0910 17:30:10.270685   13777 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0910 17:30:10.276555   13777 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0910 17:30:10.276571   13777 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0910 17:30:10.310365   13777 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 17:30:10.310384   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0910 17:30:10.315555   13777 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0910 17:30:10.315573   13777 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0910 17:30:10.352407   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:30:10.369092   13777 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0910 17:30:10.369117   13777 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0910 17:30:10.381559   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0910 17:30:10.401157   13777 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0910 17:30:10.401178   13777 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0910 17:30:10.403491   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0910 17:30:10.403515   13777 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0910 17:30:10.472910   13777 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0910 17:30:10.472930   13777 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0910 17:30:10.489850   13777 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0910 17:30:10.489869   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0910 17:30:10.511021   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 17:30:10.534214   13777 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0910 17:30:10.534238   13777 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0910 17:30:10.554150   13777 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0910 17:30:10.554167   13777 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0910 17:30:10.557521   13777 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 17:30:10.557543   13777 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 17:30:10.572746   13777 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0910 17:30:10.572764   13777 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0910 17:30:10.573994   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0910 17:30:10.574011   13777 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0910 17:30:10.704085   13777 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0910 17:30:10.704110   13777 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0910 17:30:10.727766   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 17:30:10.747348   13777 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 17:30:10.747374   13777 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 17:30:10.763336   13777 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0910 17:30:10.763355   13777 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0910 17:30:10.766511   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0910 17:30:10.774570   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0910 17:30:10.774593   13777 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0910 17:30:10.782428   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0910 17:30:10.782444   13777 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0910 17:30:10.809598   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0910 17:30:11.063857   13777 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0910 17:30:11.063892   13777 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0910 17:30:11.074085   13777 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:11.074112   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0910 17:30:11.088999   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0910 17:30:11.089024   13777 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0910 17:30:11.100617   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 17:30:11.112993   13777 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0910 17:30:11.113018   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0910 17:30:11.298472   13777 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0910 17:30:11.298502   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0910 17:30:11.316663   13777 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0910 17:30:11.316693   13777 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0910 17:30:11.369539   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0910 17:30:11.383347   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:11.653526   13777 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0910 17:30:11.653554   13777 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0910 17:30:11.678871   13777 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0910 17:30:11.678895   13777 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0910 17:30:11.862075   13777 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 17:30:11.862095   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0910 17:30:11.921871   13777 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0910 17:30:11.921897   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0910 17:30:12.123524   13777 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.028712837s)
	I0910 17:30:12.123546   13777 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.02861212s)
	I0910 17:30:12.123568   13777 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0910 17:30:12.138011   13777 node_ready.go:35] waiting up to 6m0s for node "addons-306463" to be "Ready" ...
	I0910 17:30:12.143070   13777 node_ready.go:49] node "addons-306463" has status "Ready":"True"
	I0910 17:30:12.143098   13777 node_ready.go:38] duration metric: took 5.040837ms for node "addons-306463" to be "Ready" ...
	I0910 17:30:12.143109   13777 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:30:12.155112   13777 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-c5qxp" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:12.301578   13777 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0910 17:30:12.301604   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0910 17:30:12.345205   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 17:30:12.640873   13777 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-306463" context rescaled to 1 replicas
	I0910 17:30:12.648121   13777 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 17:30:12.648142   13777 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0910 17:30:13.153205   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 17:30:13.916729   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.73001943s)
	I0910 17:30:13.916745   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.725354593s)
	I0910 17:30:13.916787   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:13.916800   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:13.916812   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:13.916818   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.657160792s)
	I0910 17:30:13.916832   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:13.916840   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:13.916849   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:13.917138   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:13.917155   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:13.917164   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:13.917162   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:13.917172   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:13.917292   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:13.917292   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:13.917312   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:13.917321   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:13.917329   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:13.917336   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:13.917347   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:13.917419   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:13.917426   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:13.917458   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:13.917492   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:13.917516   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:13.919078   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:13.919092   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:13.919112   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:13.919122   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:13.919092   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:14.275505   13777 pod_ready.go:103] pod "coredns-6f6b679f8f-c5qxp" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:14.583313   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.230869529s)
	I0910 17:30:14.583362   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:14.583374   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:14.583656   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:14.583673   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:14.583683   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:14.583691   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:14.583884   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:14.583898   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:16.178328   13777 pod_ready.go:93] pod "coredns-6f6b679f8f-c5qxp" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:16.178361   13777 pod_ready.go:82] duration metric: took 4.02322283s for pod "coredns-6f6b679f8f-c5qxp" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:16.178376   13777 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fk47l" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:16.744986   13777 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0910 17:30:16.745032   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:16.748322   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:16.748729   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:16.748755   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:16.748928   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:16.749117   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:16.749277   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:16.749413   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:16.985599   13777 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0910 17:30:17.019642   13777 addons.go:234] Setting addon gcp-auth=true in "addons-306463"
	I0910 17:30:17.019684   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:17.020002   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:17.020027   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:17.035756   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41545
	I0910 17:30:17.036129   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:17.036614   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:17.036638   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:17.036957   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:17.037567   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:17.037606   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:17.052624   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33467
	I0910 17:30:17.053092   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:17.053555   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:17.053575   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:17.053874   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:17.054058   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:17.055568   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:17.055797   13777 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0910 17:30:17.055824   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:17.058347   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:17.058720   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:17.058755   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:17.058878   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:17.059056   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:17.059232   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:17.059408   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:18.294928   13777 pod_ready.go:103] pod "coredns-6f6b679f8f-fk47l" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:18.793144   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.411553079s)
	I0910 17:30:18.793145   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.282095983s)
	I0910 17:30:18.793236   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.065445297s)
	I0910 17:30:18.793270   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793187   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793285   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793310   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793340   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.026800859s)
	I0910 17:30:18.793371   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793387   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793269   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793447   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793468   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.692800645s)
	I0910 17:30:18.793374   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.983746942s)
	I0910 17:30:18.793499   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793508   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793513   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793517   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793601   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.424038762s)
	I0910 17:30:18.793624   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793633   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793677   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.793701   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.793737   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793764   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793796   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.410424596s)
	W0910 17:30:18.793833   13777 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0910 17:30:18.793860   13777 retry.go:31] will retry after 281.684636ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0910 17:30:18.793941   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.448707771s)
	I0910 17:30:18.793961   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793971   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.794043   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.794051   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.794058   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.794066   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795483   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.795531   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.795547   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.795569   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795575   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795583   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.795590   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795649   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795657   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795658   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.795665   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.795672   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795682   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795689   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795696   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.795703   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795713   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.795732   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795744   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795751   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.795757   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795762   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795771   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795781   13777 addons.go:475] Verifying addon ingress=true in "addons-306463"
	I0910 17:30:18.795793   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.795812   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795818   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795824   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.795830   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795884   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795900   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795908   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.795914   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795971   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.796000   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.796018   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.796031   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.796038   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.796047   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.796055   13777 addons.go:475] Verifying addon metrics-server=true in "addons-306463"
	I0910 17:30:18.796152   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.796021   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.796451   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.796481   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.796495   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.796938   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.796966   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.796973   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.796992   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.797004   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.797213   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.797217   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.797239   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.797246   13777 addons.go:475] Verifying addon registry=true in "addons-306463"
	I0910 17:30:18.795865   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.798742   13777 out.go:177] * Verifying ingress addon...
	I0910 17:30:18.799682   13777 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-306463 service yakd-dashboard -n yakd-dashboard
	
	I0910 17:30:18.799716   13777 out.go:177] * Verifying registry addon...
	I0910 17:30:18.801342   13777 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0910 17:30:18.802106   13777 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0910 17:30:18.809767   13777 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0910 17:30:18.809787   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:18.811444   13777 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0910 17:30:18.811469   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
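	The kapi.go polling above waits for every pod matching a label selector to report Ready. A rough manual check of the same state (a sketch only, not part of the captured log, assuming kubectl is pointed at the addons-306463 context) would be:

		# registry addon pods, selector taken from the kapi.go line above
		kubectl --context addons-306463 -n kube-system get pods -l kubernetes.io/minikube-addons=registry
		# ingress-nginx controller pods, selector as above
		kubectl --context addons-306463 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx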
	I0910 17:30:18.826959   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.826981   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.827246   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.827267   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	W0910 17:30:18.827341   13777 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
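	The warning above is the optimistic-concurrency conflict Kubernetes returns when an object changed between read and write; the default-class marking it was attempting is controlled by the storageclass.kubernetes.io/is-default-class annotation, so a manual retry could look like the following sketch (not part of the captured log):

		# mark local-path as non-default so the standard class can take over as default
		kubectl --context addons-306463 patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'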
	I0910 17:30:18.834146   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.834161   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.834395   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.834415   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.834429   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:19.076009   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:19.326915   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:19.327040   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:19.615946   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.462685919s)
	I0910 17:30:19.616011   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:19.616033   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:19.615967   13777 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.560143893s)
	I0910 17:30:19.616447   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:19.616479   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:19.616503   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:19.616512   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:19.616521   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:19.616744   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:19.616759   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:19.616776   13777 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-306463"
	I0910 17:30:19.617622   13777 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:30:19.618428   13777 out.go:177] * Verifying csi-hostpath-driver addon...
	I0910 17:30:19.620045   13777 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0910 17:30:19.621038   13777 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0910 17:30:19.621222   13777 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0910 17:30:19.621237   13777 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0910 17:30:19.662236   13777 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0910 17:30:19.662270   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:19.722439   13777 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0910 17:30:19.722462   13777 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0910 17:30:19.763288   13777 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0910 17:30:19.763308   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0910 17:30:19.814766   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:19.815036   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:19.834549   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
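	As the ssh_runner lines show, addon manifests are staged under /etc/kubernetes/addons inside the VM and applied with the bundled kubectl. After the apply completes, the result can be inspected from the host; the commands below are a sketch (not captured in this log) using the gcp-auth namespace and label that appear in the wait further down:

		# list the staged addon manifests inside the VM
		minikube -p addons-306463 ssh -- sudo ls /etc/kubernetes/addons
		# watch the gcp-auth webhook pod come up
		kubectl --context addons-306463 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth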
	I0910 17:30:20.128981   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:20.307489   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:20.307877   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:20.625102   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:20.683791   13777 pod_ready.go:103] pod "coredns-6f6b679f8f-fk47l" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:20.806684   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:20.806816   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:20.823709   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.747658678s)
	I0910 17:30:20.823758   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:20.823770   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:20.824016   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:20.824031   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:20.824040   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:20.824048   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:20.824246   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:20.824312   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:20.824334   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:21.152748   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:21.258310   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.423679033s)
	I0910 17:30:21.258353   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:21.258363   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:21.258652   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:21.258672   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:21.258675   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:21.258682   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:21.258781   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:21.259002   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:21.259047   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:21.259050   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:21.261123   13777 addons.go:475] Verifying addon gcp-auth=true in "addons-306463"
	I0910 17:30:21.262702   13777 out.go:177] * Verifying gcp-auth addon...
	I0910 17:30:21.265139   13777 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0910 17:30:21.309290   13777 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0910 17:30:21.309307   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:21.386582   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:21.386884   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:21.629140   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:21.686431   13777 pod_ready.go:98] pod "coredns-6f6b679f8f-fk47l" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:21 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.144 HostIPs:[{IP:192.168.39.144}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-10 17:30:09 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-10 17:30:13 +0000 UTC,FinishedAt:2024-09-10 17:30:19 +0000 UTC,ContainerID:cri-o://2a84ae286f17289f7f3a5209cd0c7e4890910cd71d908afdc52a58de592bbefa,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://2a84ae286f17289f7f3a5209cd0c7e4890910cd71d908afdc52a58de592bbefa Started:0xc00269b790 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001d3ebf0} {Name:kube-api-access-vvw44 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001d3ec10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0910 17:30:21.686462   13777 pod_ready.go:82] duration metric: took 5.508078868s for pod "coredns-6f6b679f8f-fk47l" in "kube-system" namespace to be "Ready" ...
	E0910 17:30:21.686473   13777 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-fk47l" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:21 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.144 HostIPs:[{IP:192.168.39.144}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-10 17:30:09 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-10 17:30:13 +0000 UTC,FinishedAt:2024-09-10 17:30:19 +0000 UTC,ContainerID:cri-o://2a84ae286f17289f7f3a5209cd0c7e4890910cd71d908afdc52a58de592bbefa,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://2a84ae286f17289f7f3a5209cd0c7e4890910cd71d908afdc52a58de592bbefa Started:0xc00269b790 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001d3ebf0} {Name:kube-api-access-vvw44 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001d3ec10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0910 17:30:21.686485   13777 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.694377   13777 pod_ready.go:93] pod "etcd-addons-306463" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:21.694399   13777 pod_ready.go:82] duration metric: took 7.904964ms for pod "etcd-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.694410   13777 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.699906   13777 pod_ready.go:93] pod "kube-apiserver-addons-306463" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:21.699925   13777 pod_ready.go:82] duration metric: took 5.506518ms for pod "kube-apiserver-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.699935   13777 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.706491   13777 pod_ready.go:93] pod "kube-controller-manager-addons-306463" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:21.706508   13777 pod_ready.go:82] duration metric: took 6.56701ms for pod "kube-controller-manager-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.706517   13777 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-js72f" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.711913   13777 pod_ready.go:93] pod "kube-proxy-js72f" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:21.711927   13777 pod_ready.go:82] duration metric: took 5.405396ms for pod "kube-proxy-js72f" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.711934   13777 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.771105   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:21.806408   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:21.807158   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:22.082652   13777 pod_ready.go:93] pod "kube-scheduler-addons-306463" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:22.082672   13777 pod_ready.go:82] duration metric: took 370.731346ms for pod "kube-scheduler-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:22.082683   13777 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:22.127515   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:22.269247   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:22.306663   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:22.306817   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:22.626885   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:22.769155   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:22.806860   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:22.807059   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:23.126514   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:23.268573   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:23.304984   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:23.308344   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:23.625436   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:23.768625   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:23.806414   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:23.807737   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:24.089626   13777 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:24.126099   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:24.269316   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:24.306325   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:24.307191   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:24.626187   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:24.769060   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:24.805608   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:24.805998   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:25.284162   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:25.284693   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:25.304402   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:25.305601   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:25.625547   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:25.769118   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:25.805736   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:25.806413   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:26.125645   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:26.269608   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:26.307645   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:26.310692   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:26.588316   13777 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:26.625476   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:26.768854   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:26.805985   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:26.806757   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:27.126110   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:27.268618   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:27.305185   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:27.305610   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:27.625855   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:27.768850   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:27.806424   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:27.806708   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:28.126113   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:28.269445   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:28.306451   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:28.306949   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:28.589535   13777 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:28.625966   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:28.769016   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:28.805194   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:28.806093   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:29.125865   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:29.268979   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:29.306285   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:29.307264   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:29.625480   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:29.768316   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:29.807378   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:29.807652   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:30.126183   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:30.268852   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:30.307999   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:30.309034   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:30.625705   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:30.768655   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:30.807245   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:30.807772   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:31.088566   13777 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:31.125747   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:31.268110   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:31.309583   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:31.310629   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:31.665764   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:31.768905   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:31.804955   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:31.806706   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:32.125989   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:32.269609   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:32.307383   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:32.309129   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:32.626614   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:32.768068   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:32.806872   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:32.807203   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:33.089535   13777 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:33.125706   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:33.269256   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:33.305975   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:33.306252   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:33.706857   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:33.769189   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:33.805877   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:33.808046   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:34.126107   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:34.269399   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:34.306128   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:34.306283   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:34.625316   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:34.769118   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:34.805784   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:34.806308   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:35.131152   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:35.269262   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:35.305790   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:35.306213   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:35.587677   13777 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:35.626384   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:35.769202   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:35.806266   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:35.806509   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:36.127407   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:36.270434   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:36.310101   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:36.311099   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:36.590031   13777 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:36.590052   13777 pod_ready.go:82] duration metric: took 14.507363417s for pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:36.590060   13777 pod_ready.go:39] duration metric: took 24.446938548s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:30:36.590077   13777 api_server.go:52] waiting for apiserver process to appear ...
	I0910 17:30:36.590151   13777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:36.618197   13777 api_server.go:72] duration metric: took 27.052704342s to wait for apiserver process to appear ...
	I0910 17:30:36.618222   13777 api_server.go:88] waiting for apiserver healthz status ...
	I0910 17:30:36.618255   13777 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0910 17:30:36.624545   13777 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0910 17:30:36.625767   13777 api_server.go:141] control plane version: v1.31.0
	I0910 17:30:36.625787   13777 api_server.go:131] duration metric: took 7.55866ms to wait for apiserver health ...
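	The healthz probe above hits the API server endpoint directly over HTTPS. The same check can be reproduced with curl; this is a sketch, not part of the captured log, and it assumes the kubeadm default that /healthz is readable anonymously (using -k because the cluster CA is not in the host trust store):

		curl -k https://192.168.39.144:8443/healthz
		# expected response body: ok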
	I0910 17:30:36.625795   13777 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 17:30:36.628168   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:36.635782   13777 system_pods.go:59] 18 kube-system pods found
	I0910 17:30:36.635816   13777 system_pods.go:61] "coredns-6f6b679f8f-c5qxp" [5ce9784e-e567-4ff5-a7fc-cb8589c471c1] Running
	I0910 17:30:36.635828   13777 system_pods.go:61] "csi-hostpath-attacher-0" [e5afcda1-955a-445a-95b8-dc286510fa6f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0910 17:30:36.635837   13777 system_pods.go:61] "csi-hostpath-resizer-0" [5ab24cbf-8d77-43c3-9db2-6e06eed48352] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0910 17:30:36.635848   13777 system_pods.go:61] "csi-hostpathplugin-8hg5b" [f919643c-2604-4be0-8895-fe335d9c578a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0910 17:30:36.635853   13777 system_pods.go:61] "etcd-addons-306463" [dd177bb5-fe2a-4136-a871-92cd0f322fce] Running
	I0910 17:30:36.635862   13777 system_pods.go:61] "kube-apiserver-addons-306463" [7c3b5014-0b97-43e9-b162-3856dabfa5c1] Running
	I0910 17:30:36.635868   13777 system_pods.go:61] "kube-controller-manager-addons-306463" [bd143d52-b147-4e2b-8221-4b4c215500f8] Running
	I0910 17:30:36.635878   13777 system_pods.go:61] "kube-ingress-dns-minikube" [33998c91-0157-46f1-aa90-c6001166fff3] Running
	I0910 17:30:36.635884   13777 system_pods.go:61] "kube-proxy-js72f" [97604350-aebe-4a6c-b687-0204de19c3f5] Running
	I0910 17:30:36.635890   13777 system_pods.go:61] "kube-scheduler-addons-306463" [6eb6466c-c3d4-4e16-b246-c964865de3f6] Running
	I0910 17:30:36.635900   13777 system_pods.go:61] "metrics-server-84c5f94fbc-q6wcq" [4dc23d17-89f0-47a5-8880-0cf317f8a901] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 17:30:36.635909   13777 system_pods.go:61] "nvidia-device-plugin-daemonset-smwnt" [cf2f1df4-c2cd-4ab3-927a-16595a20e831] Running
	I0910 17:30:36.635921   13777 system_pods.go:61] "registry-66c9cd494c-6qxxb" [e9ac504f-2687-4fc9-bc82-285fcdbd1c77] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0910 17:30:36.635932   13777 system_pods.go:61] "registry-proxy-dmz6w" [61812c3a-2248-430b-97e8-3b188671e0eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0910 17:30:36.635944   13777 system_pods.go:61] "snapshot-controller-56fcc65765-nnnw7" [5edd6128-e9f7-431b-822d-49f5ef92d0af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:36.635956   13777 system_pods.go:61] "snapshot-controller-56fcc65765-w9ln4" [1a1094b3-ec64-4401-b8f6-8812fa8ed85d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:36.635965   13777 system_pods.go:61] "storage-provisioner" [6196330e-c966-44c2-aedd-6dc5e570c6e5] Running
	I0910 17:30:36.635976   13777 system_pods.go:61] "tiller-deploy-b48cc5f79-4jxbr" [1dfb2d44-f679-47b9-8f2d-4d144742e3a1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0910 17:30:36.635989   13777 system_pods.go:74] duration metric: took 10.187442ms to wait for pod list to return data ...
	I0910 17:30:36.636002   13777 default_sa.go:34] waiting for default service account to be created ...
	I0910 17:30:36.640110   13777 default_sa.go:45] found service account: "default"
	I0910 17:30:36.640132   13777 default_sa.go:55] duration metric: took 4.119977ms for default service account to be created ...
	I0910 17:30:36.640142   13777 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 17:30:36.647574   13777 system_pods.go:86] 18 kube-system pods found
	I0910 17:30:36.647597   13777 system_pods.go:89] "coredns-6f6b679f8f-c5qxp" [5ce9784e-e567-4ff5-a7fc-cb8589c471c1] Running
	I0910 17:30:36.647606   13777 system_pods.go:89] "csi-hostpath-attacher-0" [e5afcda1-955a-445a-95b8-dc286510fa6f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0910 17:30:36.647612   13777 system_pods.go:89] "csi-hostpath-resizer-0" [5ab24cbf-8d77-43c3-9db2-6e06eed48352] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0910 17:30:36.647620   13777 system_pods.go:89] "csi-hostpathplugin-8hg5b" [f919643c-2604-4be0-8895-fe335d9c578a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0910 17:30:36.647624   13777 system_pods.go:89] "etcd-addons-306463" [dd177bb5-fe2a-4136-a871-92cd0f322fce] Running
	I0910 17:30:36.647629   13777 system_pods.go:89] "kube-apiserver-addons-306463" [7c3b5014-0b97-43e9-b162-3856dabfa5c1] Running
	I0910 17:30:36.647632   13777 system_pods.go:89] "kube-controller-manager-addons-306463" [bd143d52-b147-4e2b-8221-4b4c215500f8] Running
	I0910 17:30:36.647637   13777 system_pods.go:89] "kube-ingress-dns-minikube" [33998c91-0157-46f1-aa90-c6001166fff3] Running
	I0910 17:30:36.647640   13777 system_pods.go:89] "kube-proxy-js72f" [97604350-aebe-4a6c-b687-0204de19c3f5] Running
	I0910 17:30:36.647644   13777 system_pods.go:89] "kube-scheduler-addons-306463" [6eb6466c-c3d4-4e16-b246-c964865de3f6] Running
	I0910 17:30:36.647649   13777 system_pods.go:89] "metrics-server-84c5f94fbc-q6wcq" [4dc23d17-89f0-47a5-8880-0cf317f8a901] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 17:30:36.647653   13777 system_pods.go:89] "nvidia-device-plugin-daemonset-smwnt" [cf2f1df4-c2cd-4ab3-927a-16595a20e831] Running
	I0910 17:30:36.647660   13777 system_pods.go:89] "registry-66c9cd494c-6qxxb" [e9ac504f-2687-4fc9-bc82-285fcdbd1c77] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0910 17:30:36.647668   13777 system_pods.go:89] "registry-proxy-dmz6w" [61812c3a-2248-430b-97e8-3b188671e0eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0910 17:30:36.647676   13777 system_pods.go:89] "snapshot-controller-56fcc65765-nnnw7" [5edd6128-e9f7-431b-822d-49f5ef92d0af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:36.647684   13777 system_pods.go:89] "snapshot-controller-56fcc65765-w9ln4" [1a1094b3-ec64-4401-b8f6-8812fa8ed85d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:36.647688   13777 system_pods.go:89] "storage-provisioner" [6196330e-c966-44c2-aedd-6dc5e570c6e5] Running
	I0910 17:30:36.647693   13777 system_pods.go:89] "tiller-deploy-b48cc5f79-4jxbr" [1dfb2d44-f679-47b9-8f2d-4d144742e3a1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0910 17:30:36.647702   13777 system_pods.go:126] duration metric: took 7.55431ms to wait for k8s-apps to be running ...
	I0910 17:30:36.647708   13777 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 17:30:36.647747   13777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:30:36.688724   13777 system_svc.go:56] duration metric: took 40.998614ms WaitForService to wait for kubelet
	I0910 17:30:36.688757   13777 kubeadm.go:582] duration metric: took 27.123268565s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:30:36.688785   13777 node_conditions.go:102] verifying NodePressure condition ...
	I0910 17:30:36.692318   13777 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 17:30:36.692341   13777 node_conditions.go:123] node cpu capacity is 2
	I0910 17:30:36.692353   13777 node_conditions.go:105] duration metric: took 3.562021ms to run NodePressure ...
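	The NodePressure figures above (17734596Ki ephemeral storage, 2 CPUs) correspond to the node's status.capacity and can be read back directly. The command below is a sketch, not captured in this log, and assumes the single node is named addons-306463, as the static-pod names such as kube-apiserver-addons-306463 suggest:

		kubectl --context addons-306463 get node addons-306463 -o jsonpath='{.status.capacity}'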
	I0910 17:30:36.692364   13777 start.go:241] waiting for startup goroutines ...
	I0910 17:30:36.769013   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:36.805343   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:36.807812   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:37.125928   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:37.268408   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:37.307358   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:37.307370   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:37.626450   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:37.769104   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:37.807631   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:37.808032   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:38.410369   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:38.410675   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:38.410845   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:38.411724   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:38.626551   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:38.772173   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:38.813605   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:38.813975   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:39.126089   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:39.268594   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:39.306434   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:39.307212   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:39.627575   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:39.769119   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:39.806793   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:39.806955   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:40.126013   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:40.269594   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:40.307652   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:40.308116   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:40.626874   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:40.772237   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:40.809133   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:40.810841   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:41.126532   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:41.268653   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:41.310669   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:41.310958   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:41.638682   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:41.769185   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:41.805908   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:41.805996   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:42.125541   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:42.274727   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:42.314152   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:42.314527   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:42.625893   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:42.769480   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:42.805680   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:42.812721   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:43.125909   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:43.269084   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:43.306576   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:43.306976   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:43.715505   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:43.771618   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:43.805941   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:43.806723   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:44.124772   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:44.269280   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:44.306120   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:44.306950   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:44.625991   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:44.768665   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:44.805454   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:44.807495   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:45.126730   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:45.269364   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:45.306168   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:45.306714   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:45.631613   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:45.880383   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:45.883658   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:45.884726   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:46.127460   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:46.269296   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:46.306086   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:46.306509   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:46.625344   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:46.769098   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:46.806534   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:46.806996   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:47.124955   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:47.268498   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:47.306845   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:47.307880   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:47.626319   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:47.769012   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:47.806321   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:47.807436   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:48.125713   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:48.268906   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:48.306844   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:48.307565   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:48.626864   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:48.768630   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:48.805303   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:48.805947   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:49.131069   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:49.269163   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:49.305787   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:49.305910   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:49.625678   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:49.769604   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:49.809587   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:49.810440   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:50.125736   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:50.269191   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:50.306409   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:50.306739   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:50.625464   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:50.768892   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:50.805409   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:50.806243   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:51.125616   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:51.269034   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:51.306610   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:51.306959   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:51.625727   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:51.769169   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:51.806830   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:51.810306   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:52.125814   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:52.270051   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:52.306086   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:52.306192   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:52.626473   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:52.768916   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:52.806305   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:52.806665   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:53.125899   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:53.269024   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:53.305645   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:53.307059   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:53.627179   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:53.770551   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:53.806405   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:53.806674   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:54.126024   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:54.269166   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:54.371393   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:54.372173   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:54.625924   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:54.768277   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:54.806663   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:54.806832   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:55.125469   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:55.268594   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:55.305556   13777 kapi.go:107] duration metric: took 36.503445805s to wait for kubernetes.io/minikube-addons=registry ...
	I0910 17:30:55.313333   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:55.631573   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:55.768955   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:55.805802   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:56.125742   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:56.270140   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:56.305860   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:56.625644   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:56.769297   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:56.806369   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:57.127588   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:57.270814   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:57.305110   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:57.625709   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:57.768903   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:57.805501   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:58.126627   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:58.269044   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:58.305193   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:58.626293   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:58.768712   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:58.804911   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:59.125828   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:59.269468   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:59.306105   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:59.625637   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:59.769614   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:59.807183   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:00.127716   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:00.270273   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:00.306165   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:00.625737   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:00.768998   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:00.805477   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:01.125499   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:01.269176   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:01.306304   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:01.626469   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:01.768732   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:01.805496   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:02.127553   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:02.269284   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:02.305980   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:02.628890   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:02.768835   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:02.805753   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:03.126003   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:03.268927   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:03.306626   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:03.626444   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:03.768871   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:03.805456   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:04.125203   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:04.268865   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:04.306288   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:04.627855   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:04.769364   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:04.806388   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:05.127184   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:05.275177   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:05.381315   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:05.625844   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:05.769267   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:05.805825   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:06.126554   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:06.268758   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:06.306366   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:06.627171   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:06.770092   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:06.806226   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:07.126711   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:07.269048   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:07.306150   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:07.625655   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:07.768742   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:07.806033   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:08.126084   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:08.269282   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:08.305959   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:08.626832   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:08.769318   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:08.807491   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:09.126941   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:09.275226   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:09.308718   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:09.626407   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:09.769717   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:09.813779   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:10.125731   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:10.269355   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:10.309604   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:10.627981   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:10.770045   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:10.870554   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:11.128226   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:11.268520   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:11.308019   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:11.626140   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:11.769611   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:11.806272   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:12.126145   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:12.269471   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:12.306580   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:12.644024   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:12.770364   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:12.807268   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:13.127370   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:13.271524   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:13.306201   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:13.626164   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:13.768629   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:13.805319   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:14.126256   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:14.604140   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:14.604741   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:14.625880   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:14.769542   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:14.805015   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:15.129370   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:15.270705   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:15.306168   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:15.625569   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:15.769509   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:15.806404   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:16.127122   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:16.268486   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:16.306256   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:16.627609   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:16.768807   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:16.805284   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:17.126777   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:17.273904   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:17.306160   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:17.626219   13777 kapi.go:107] duration metric: took 58.005179225s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0910 17:31:17.769064   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:17.806337   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:18.269605   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:18.306821   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:18.768968   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:18.806084   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:19.269068   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:19.305883   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:19.768607   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:19.805388   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:20.269024   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:20.305384   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:20.770422   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:20.805852   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:21.268928   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:21.305819   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:21.770149   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:21.806244   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:22.268897   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:22.305737   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:22.769883   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:22.811948   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:23.269476   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:23.306255   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:23.770445   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:23.806935   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:24.268635   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:24.305750   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:24.768424   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:24.805735   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:25.269370   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:25.306913   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:25.770284   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:25.805807   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:26.269063   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:26.305656   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:26.769396   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:26.805876   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:27.268241   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:27.307415   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:27.771452   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:27.806295   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:28.290195   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:28.311170   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:28.771373   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:28.805752   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:29.269499   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:29.306013   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:29.769982   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:29.871116   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:30.268936   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:30.305384   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:30.769209   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:30.806494   13777 kapi.go:107] duration metric: took 1m12.005153392s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0910 17:31:31.269701   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:31.769526   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:32.268540   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:32.771389   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:33.272123   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:33.769698   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:34.269894   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:34.769472   13777 kapi.go:107] duration metric: took 1m13.504330818s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0910 17:31:34.770991   13777 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-306463 cluster.
	I0910 17:31:34.772225   13777 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0910 17:31:34.773540   13777 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0910 17:31:34.774682   13777 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, metrics-server, inspektor-gadget, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0910 17:31:34.775694   13777 addons.go:510] duration metric: took 1m25.210169317s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner metrics-server inspektor-gadget helm-tiller yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0910 17:31:34.775727   13777 start.go:246] waiting for cluster config update ...
	I0910 17:31:34.775743   13777 start.go:255] writing updated cluster config ...
	I0910 17:31:34.775953   13777 ssh_runner.go:195] Run: rm -f paused
	I0910 17:31:34.827173   13777 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 17:31:34.828957   13777 out.go:177] * Done! kubectl is now configured to use "addons-306463" cluster and "default" namespace by default
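Note: the repeated kapi.go:96 entries above are minikube's addon readiness loop polling pods by label selector until they leave Pending. A rough manual equivalent, shown only as an illustrative sketch (the label selector and context name are taken from the log; the kube-system namespace and the 6-minute timeout are assumptions), would be:

	# wait for the registry addon pods to become Ready, mirroring the kapi.go wait loop
	kubectl --context addons-306463 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry \
	  --for=condition=Ready --timeout=6m

The other selectors seen above (kubernetes.io/minikube-addons=csi-hostpath-driver, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=gcp-auth) can be substituted in the same way.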
	
	
	==> CRI-O <==
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.240357765Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990050240332083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:526434,},InodesUsed:&UInt64Value{Value:185,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2847d66d-9a01-4be8-844a-e485dc21aaff name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.241007559Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0b4649f-549d-4397-b43a-65ce6dd7f616 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.241068727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0b4649f-549d-4397-b43a-65ce6dd7f616 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.241525869Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8d3b80ca19697f974829611946541fe4da060d6d55373edfbf9e15edf3534f5,PodSandboxId:634edff03c95301a882a37c72e3b249677be4404dc923fad8ec9f30cf13eb89a,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725990009197945155,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-2be347cf-fef5-49ec-8123-8e3cf02ab859,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 84656cc0-f634-4b52-8551-d72d75859b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a074423e8ebdd410b47583f2b576777aea8a71a4ad9647b194772a90214f53d4,PodSandboxId:ce09f6762e26e4520e39dd28a705c95b9312ad8f6d0ab7885cd53880b4e0aadf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:1f3c4ec00c804f65805bd22b358c8fbba6b0ab4e32171adba33058cf635923aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:87ff76f62d367950186bde563642e39208c0e2b4afc833b4b3b01b8fef60ae9e,State:CONTAINER_EXITED,CreatedAt:1725990005333852404,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee6952e4-9519-41d9-bcd1-f9113da1df63,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582aef687e6f17f3c2a84515f32a26653e60025be2de8edfede124d4524777cf,PodSandboxId:d0831ebcc1f1d7c65d9218dea5fa54b7beacaf1a8142f2630e7f0b73140fb9b1,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725989493343593602,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9cff5,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c71f9bb4-5d5d-48be-b1a6-4d832400d952,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"
containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d4dc66b78cf362fa98ace3d25b3d6d54fb9efed385b4e766904f13e18ce020e,PodSandboxId:df8fa8192870cdf368c4f388ed9689e0d66ecaa5ff5bcecacbb07305e3865e56,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1725989489716072720,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-rrx9m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 63d978ce-6789-493d-a46f
-de2712ba51dd,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b0dcc0b067c1fe33cab5925ccf93236b0b4235680f7460bc114b84a50691c3a5,PodSandboxId:203b4e990f94eb34108df6c75f63302a21901ab5de4b10d66645bffb8a76ff0e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725989466907044617,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tddrl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bfd83841-db7b-49e3-9721-8b75e0cdd1c7,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4919ba67c923ab8d43533c369b87ef4ced592fbe0c0deb116fbbb857ebf533ae,PodSandboxId:409de028dc3d4ce1988813969f751c45cb1b7bec3478589a41e13f94d2bb567f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e645
7aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725989462977955428,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zp9t8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a3fb5872-1ea0-4a79-a942-351d4c144608,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:966e8fda1665b6c34bccfc37e68af3aa7ba5fc8c6643150d3573006c20f4faef,PodSandboxId:f7bfd95e6be866d636d9cb971d37220f839306f979253d93623cc5d17f36d789,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae
5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1725989459800507449,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-9mkhq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9ca24bc-2998-4f22-943c-ca875f6ed7cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da547a7aead5af7079e6c4408d2460c00c2c3f0ddd2cb2942ec0adff4b01a99b,PodSandboxId:7
9f7b009c80604c6c8152b7557ea269ad1fe2ae7876298e80f20fa5a2e49c54f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1725989454186936320,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-dmz6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61812c3a-2248-430b-97e8-3b188671e0eb,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e0270fff871840c9a822cf5b0f515d34827f1d9fed151a0f2d5082fa11dd27b,PodSandboxId:bf5609ea0b023af2c716c42016528abbd115cf45498ea88cc431983aea60eb64,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725989440357118697,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6wcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc23d17-89f0-47a5-8880-0cf317f8a901,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65f3b74cfc39b0ee9fa5464921aff8a5bf225157874cdb878b7ba91a8ace91fd,PodSandboxId:1f7c97e7118ed657a22e102550a0d75fc29b4dfdc7e4d2d7e9aa3c445e819305,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1725989435011140038,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-smwnt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf2f1df4-c2cd-4ab3-927a-16595a20e831,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3927bb112d8eef7839083f69c6d04d29e13d523f980de943445625e77252794,PodSandboxId:2ad00bd68949be4886d451d94ee2da0c9daa4bf60aa60b778c350762f1581fca,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1725989426323330056,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33998c91-0157-46f1-aa90-c6001166
fff3,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2884c8e7918f3f66e43acc12fa41b52b90eaae0693407161bbabf4a79747bf,PodSandboxId:f3d0ecd016c61169dce7badc41eb39f61e6cb0d229dd82f9df2eaa408100403d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725989416147478719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 6196330e-c966-44c2-aedd-6dc5e570c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a215f27453dd39d7db9f39e41ec7e4ac2e49500ff6c716a9bfafc824954266b,PodSandboxId:a8d7383a3c4c8c7c4d4b4b82ad25e06f4d8654846ce6d88ed03d348a31c77acf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725989412993871459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5qxp,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 5ce9784e-e567-4ff5-a7fc-cb8589c471c1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a73d39390d5ae35c72fc26b22cf8e0830151755c0757d4db86635b5c22b5975,PodSandboxId:8987d0bb394a57cd9a58f92728311e29e63b4b78f29249c4fc624a2e49afcc09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINE
R_RUNNING,CreatedAt:1725989410338539077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js72f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97604350-aebe-4a6c-b687-0204de19c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f698d8d7966b01930ddbc6587bb08f04f7e2f8adad63b8a3e9887aed729b9495,PodSandboxId:3e898142a158834481c7a1d8ff69ecd76325a6c790734ed640797010ed8e7649,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725989399
351056533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1009c91d9d6b512577ae300fa67a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2fd106868bc397fe25e2b6527f1b60b5821fe72928318b4f27a3432e9cea54,PodSandboxId:bff13732bced4bb24a37b7520e34f6c69fd42234a9343b0a1807cb796c72700a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172598939936638508
8,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef6519980e55c7294622197c7f614a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a702e238565e00637134e10aceb4b0a000411fb06af5a78bdce0a089d6343b7b,PodSandboxId:bdfc49df82eed18f6ae46cecdb49805465a79a0b7da6ba3dc74ffb7bd3ee5038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:17259893993170662
21,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa4afb4c8677b28249b35dd2b3e2495,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9820f2fa1dd2a81346b7b004efde785c85e92df60bded4a7237f9c20a0de805c,PodSandboxId:636a4a297aa530f5e18e30cd561b223da7ddada95440c3fce95b8b691604e464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725989399332095776,Labels:map[string]string
{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a02b0c0abfab97cfeed0b549a823c12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0b4649f-549d-4397-b43a-65ce6dd7f616 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.279097202Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16f32409-57fc-45ad-bf9f-24f6f87b9276 name=/runtime.v1.RuntimeService/Version
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.279176575Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16f32409-57fc-45ad-bf9f-24f6f87b9276 name=/runtime.v1.RuntimeService/Version
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.280653537Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=89339a76-afd6-43c6-b543-d152223273a6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.281740166Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990050281713671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:526434,},InodesUsed:&UInt64Value{Value:185,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89339a76-afd6-43c6-b543-d152223273a6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.282594393Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55d73ae0-0e6a-4541-8bea-39555cdca995 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.282735612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55d73ae0-0e6a-4541-8bea-39555cdca995 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.283531635Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8d3b80ca19697f974829611946541fe4da060d6d55373edfbf9e15edf3534f5,PodSandboxId:634edff03c95301a882a37c72e3b249677be4404dc923fad8ec9f30cf13eb89a,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725990009197945155,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-2be347cf-fef5-49ec-8123-8e3cf02ab859,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 84656cc0-f634-4b52-8551-d72d75859b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a074423e8ebdd410b47583f2b576777aea8a71a4ad9647b194772a90214f53d4,PodSandboxId:ce09f6762e26e4520e39dd28a705c95b9312ad8f6d0ab7885cd53880b4e0aadf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:1f3c4ec00c804f65805bd22b358c8fbba6b0ab4e32171adba33058cf635923aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:87ff76f62d367950186bde563642e39208c0e2b4afc833b4b3b01b8fef60ae9e,State:CONTAINER_EXITED,CreatedAt:1725990005333852404,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee6952e4-9519-41d9-bcd1-f9113da1df63,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582aef687e6f17f3c2a84515f32a26653e60025be2de8edfede124d4524777cf,PodSandboxId:d0831ebcc1f1d7c65d9218dea5fa54b7beacaf1a8142f2630e7f0b73140fb9b1,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725989493343593602,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9cff5,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c71f9bb4-5d5d-48be-b1a6-4d832400d952,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"
containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d4dc66b78cf362fa98ace3d25b3d6d54fb9efed385b4e766904f13e18ce020e,PodSandboxId:df8fa8192870cdf368c4f388ed9689e0d66ecaa5ff5bcecacbb07305e3865e56,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1725989489716072720,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-rrx9m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 63d978ce-6789-493d-a46f
-de2712ba51dd,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b0dcc0b067c1fe33cab5925ccf93236b0b4235680f7460bc114b84a50691c3a5,PodSandboxId:203b4e990f94eb34108df6c75f63302a21901ab5de4b10d66645bffb8a76ff0e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725989466907044617,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tddrl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bfd83841-db7b-49e3-9721-8b75e0cdd1c7,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4919ba67c923ab8d43533c369b87ef4ced592fbe0c0deb116fbbb857ebf533ae,PodSandboxId:409de028dc3d4ce1988813969f751c45cb1b7bec3478589a41e13f94d2bb567f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e645
7aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725989462977955428,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zp9t8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a3fb5872-1ea0-4a79-a942-351d4c144608,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:966e8fda1665b6c34bccfc37e68af3aa7ba5fc8c6643150d3573006c20f4faef,PodSandboxId:f7bfd95e6be866d636d9cb971d37220f839306f979253d93623cc5d17f36d789,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae
5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1725989459800507449,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-9mkhq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9ca24bc-2998-4f22-943c-ca875f6ed7cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da547a7aead5af7079e6c4408d2460c00c2c3f0ddd2cb2942ec0adff4b01a99b,PodSandboxId:7
9f7b009c80604c6c8152b7557ea269ad1fe2ae7876298e80f20fa5a2e49c54f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1725989454186936320,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-dmz6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61812c3a-2248-430b-97e8-3b188671e0eb,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e0270fff871840c9a822cf5b0f515d34827f1d9fed151a0f2d5082fa11dd27b,PodSandboxId:bf5609ea0b023af2c716c42016528abbd115cf45498ea88cc431983aea60eb64,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725989440357118697,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6wcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc23d17-89f0-47a5-8880-0cf317f8a901,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65f3b74cfc39b0ee9fa5464921aff8a5bf225157874cdb878b7ba91a8ace91fd,PodSandboxId:1f7c97e7118ed657a22e102550a0d75fc29b4dfdc7e4d2d7e9aa3c445e819305,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1725989435011140038,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-smwnt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf2f1df4-c2cd-4ab3-927a-16595a20e831,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3927bb112d8eef7839083f69c6d04d29e13d523f980de943445625e77252794,PodSandboxId:2ad00bd68949be4886d451d94ee2da0c9daa4bf60aa60b778c350762f1581fca,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1725989426323330056,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33998c91-0157-46f1-aa90-c6001166
fff3,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2884c8e7918f3f66e43acc12fa41b52b90eaae0693407161bbabf4a79747bf,PodSandboxId:f3d0ecd016c61169dce7badc41eb39f61e6cb0d229dd82f9df2eaa408100403d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725989416147478719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 6196330e-c966-44c2-aedd-6dc5e570c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a215f27453dd39d7db9f39e41ec7e4ac2e49500ff6c716a9bfafc824954266b,PodSandboxId:a8d7383a3c4c8c7c4d4b4b82ad25e06f4d8654846ce6d88ed03d348a31c77acf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725989412993871459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5qxp,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 5ce9784e-e567-4ff5-a7fc-cb8589c471c1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a73d39390d5ae35c72fc26b22cf8e0830151755c0757d4db86635b5c22b5975,PodSandboxId:8987d0bb394a57cd9a58f92728311e29e63b4b78f29249c4fc624a2e49afcc09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINE
R_RUNNING,CreatedAt:1725989410338539077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js72f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97604350-aebe-4a6c-b687-0204de19c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f698d8d7966b01930ddbc6587bb08f04f7e2f8adad63b8a3e9887aed729b9495,PodSandboxId:3e898142a158834481c7a1d8ff69ecd76325a6c790734ed640797010ed8e7649,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725989399
351056533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1009c91d9d6b512577ae300fa67a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2fd106868bc397fe25e2b6527f1b60b5821fe72928318b4f27a3432e9cea54,PodSandboxId:bff13732bced4bb24a37b7520e34f6c69fd42234a9343b0a1807cb796c72700a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172598939936638508
8,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef6519980e55c7294622197c7f614a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a702e238565e00637134e10aceb4b0a000411fb06af5a78bdce0a089d6343b7b,PodSandboxId:bdfc49df82eed18f6ae46cecdb49805465a79a0b7da6ba3dc74ffb7bd3ee5038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:17259893993170662
21,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa4afb4c8677b28249b35dd2b3e2495,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9820f2fa1dd2a81346b7b004efde785c85e92df60bded4a7237f9c20a0de805c,PodSandboxId:636a4a297aa530f5e18e30cd561b223da7ddada95440c3fce95b8b691604e464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725989399332095776,Labels:map[string]string
{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a02b0c0abfab97cfeed0b549a823c12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55d73ae0-0e6a-4541-8bea-39555cdca995 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.320056614Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=367fe67a-fc83-4dfe-b344-954b22d0016d name=/runtime.v1.RuntimeService/Version
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.320169297Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=367fe67a-fc83-4dfe-b344-954b22d0016d name=/runtime.v1.RuntimeService/Version
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.321463393Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eebbf1ce-bfcf-4bc8-a5a4-0b563efc750a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.322741466Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990050322708688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:526434,},InodesUsed:&UInt64Value{Value:185,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eebbf1ce-bfcf-4bc8-a5a4-0b563efc750a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.323463221Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7fbbf9cd-d476-40d0-9a2b-212e4cce5a97 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.323575133Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7fbbf9cd-d476-40d0-9a2b-212e4cce5a97 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.324752380Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8d3b80ca19697f974829611946541fe4da060d6d55373edfbf9e15edf3534f5,PodSandboxId:634edff03c95301a882a37c72e3b249677be4404dc923fad8ec9f30cf13eb89a,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725990009197945155,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-2be347cf-fef5-49ec-8123-8e3cf02ab859,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 84656cc0-f634-4b52-8551-d72d75859b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a074423e8ebdd410b47583f2b576777aea8a71a4ad9647b194772a90214f53d4,PodSandboxId:ce09f6762e26e4520e39dd28a705c95b9312ad8f6d0ab7885cd53880b4e0aadf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:1f3c4ec00c804f65805bd22b358c8fbba6b0ab4e32171adba33058cf635923aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:87ff76f62d367950186bde563642e39208c0e2b4afc833b4b3b01b8fef60ae9e,State:CONTAINER_EXITED,CreatedAt:1725990005333852404,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee6952e4-9519-41d9-bcd1-f9113da1df63,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582aef687e6f17f3c2a84515f32a26653e60025be2de8edfede124d4524777cf,PodSandboxId:d0831ebcc1f1d7c65d9218dea5fa54b7beacaf1a8142f2630e7f0b73140fb9b1,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725989493343593602,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9cff5,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c71f9bb4-5d5d-48be-b1a6-4d832400d952,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"
containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d4dc66b78cf362fa98ace3d25b3d6d54fb9efed385b4e766904f13e18ce020e,PodSandboxId:df8fa8192870cdf368c4f388ed9689e0d66ecaa5ff5bcecacbb07305e3865e56,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1725989489716072720,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-rrx9m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 63d978ce-6789-493d-a46f
-de2712ba51dd,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b0dcc0b067c1fe33cab5925ccf93236b0b4235680f7460bc114b84a50691c3a5,PodSandboxId:203b4e990f94eb34108df6c75f63302a21901ab5de4b10d66645bffb8a76ff0e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725989466907044617,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tddrl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bfd83841-db7b-49e3-9721-8b75e0cdd1c7,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4919ba67c923ab8d43533c369b87ef4ced592fbe0c0deb116fbbb857ebf533ae,PodSandboxId:409de028dc3d4ce1988813969f751c45cb1b7bec3478589a41e13f94d2bb567f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e645
7aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725989462977955428,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zp9t8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a3fb5872-1ea0-4a79-a942-351d4c144608,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:966e8fda1665b6c34bccfc37e68af3aa7ba5fc8c6643150d3573006c20f4faef,PodSandboxId:f7bfd95e6be866d636d9cb971d37220f839306f979253d93623cc5d17f36d789,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae
5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1725989459800507449,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-9mkhq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9ca24bc-2998-4f22-943c-ca875f6ed7cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da547a7aead5af7079e6c4408d2460c00c2c3f0ddd2cb2942ec0adff4b01a99b,PodSandboxId:7
9f7b009c80604c6c8152b7557ea269ad1fe2ae7876298e80f20fa5a2e49c54f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1725989454186936320,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-dmz6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61812c3a-2248-430b-97e8-3b188671e0eb,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e0270fff871840c9a822cf5b0f515d34827f1d9fed151a0f2d5082fa11dd27b,PodSandboxId:bf5609ea0b023af2c716c42016528abbd115cf45498ea88cc431983aea60eb64,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725989440357118697,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6wcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc23d17-89f0-47a5-8880-0cf317f8a901,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65f3b74cfc39b0ee9fa5464921aff8a5bf225157874cdb878b7ba91a8ace91fd,PodSandboxId:1f7c97e7118ed657a22e102550a0d75fc29b4dfdc7e4d2d7e9aa3c445e819305,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1725989435011140038,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-smwnt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf2f1df4-c2cd-4ab3-927a-16595a20e831,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3927bb112d8eef7839083f69c6d04d29e13d523f980de943445625e77252794,PodSandboxId:2ad00bd68949be4886d451d94ee2da0c9daa4bf60aa60b778c350762f1581fca,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1725989426323330056,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33998c91-0157-46f1-aa90-c6001166
fff3,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2884c8e7918f3f66e43acc12fa41b52b90eaae0693407161bbabf4a79747bf,PodSandboxId:f3d0ecd016c61169dce7badc41eb39f61e6cb0d229dd82f9df2eaa408100403d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725989416147478719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 6196330e-c966-44c2-aedd-6dc5e570c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a215f27453dd39d7db9f39e41ec7e4ac2e49500ff6c716a9bfafc824954266b,PodSandboxId:a8d7383a3c4c8c7c4d4b4b82ad25e06f4d8654846ce6d88ed03d348a31c77acf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725989412993871459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5qxp,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 5ce9784e-e567-4ff5-a7fc-cb8589c471c1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a73d39390d5ae35c72fc26b22cf8e0830151755c0757d4db86635b5c22b5975,PodSandboxId:8987d0bb394a57cd9a58f92728311e29e63b4b78f29249c4fc624a2e49afcc09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINE
R_RUNNING,CreatedAt:1725989410338539077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js72f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97604350-aebe-4a6c-b687-0204de19c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f698d8d7966b01930ddbc6587bb08f04f7e2f8adad63b8a3e9887aed729b9495,PodSandboxId:3e898142a158834481c7a1d8ff69ecd76325a6c790734ed640797010ed8e7649,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725989399
351056533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1009c91d9d6b512577ae300fa67a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2fd106868bc397fe25e2b6527f1b60b5821fe72928318b4f27a3432e9cea54,PodSandboxId:bff13732bced4bb24a37b7520e34f6c69fd42234a9343b0a1807cb796c72700a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172598939936638508
8,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef6519980e55c7294622197c7f614a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a702e238565e00637134e10aceb4b0a000411fb06af5a78bdce0a089d6343b7b,PodSandboxId:bdfc49df82eed18f6ae46cecdb49805465a79a0b7da6ba3dc74ffb7bd3ee5038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:17259893993170662
21,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa4afb4c8677b28249b35dd2b3e2495,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9820f2fa1dd2a81346b7b004efde785c85e92df60bded4a7237f9c20a0de805c,PodSandboxId:636a4a297aa530f5e18e30cd561b223da7ddada95440c3fce95b8b691604e464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725989399332095776,Labels:map[string]string
{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a02b0c0abfab97cfeed0b549a823c12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7fbbf9cd-d476-40d0-9a2b-212e4cce5a97 name=/runtime.v1.RuntimeService/ListContainers
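	(Editor's note, not part of the captured log: the debug lines above record the three CRI calls CRI-O answers on every kubelet poll — RuntimeService/Version, ImageService/ImageFsInfo and RuntimeService/ListContainers with an empty filter, hence the "No filters were applied" message. A minimal Go sketch of the same three calls against the CRI-O socket might look like the following; the socket path /var/run/crio/crio.sock and the 10-second timeout are assumptions, not values taken from this run.)

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed CRI-O socket location inside the node; adjust if the runtime is configured differently.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)

		// RuntimeService/Version — the log shows cri-o 1.29.1 answering this.
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("runtime:", ver.GetRuntimeName(), ver.GetRuntimeVersion())

		// ImageService/ImageFsInfo — mountpoint and usage of the image filesystem.
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, f := range fs.GetImageFilesystems() {
			fmt.Println("image fs:", f.GetFsId().GetMountpoint(), "used bytes:", f.GetUsedBytes().GetValue())
		}

		// RuntimeService/ListContainers with an empty filter returns the full container list,
		// matching the "No filters were applied, returning full container list" debug line.
		cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range cs.GetContainers() {
			fmt.Println(c.GetMetadata().GetName(), c.GetState())
		}
	}

	(The same calls can typically be reproduced interactively on the node with crictl, e.g. crictl version, crictl imagefsinfo and crictl ps -a, which talk to the same socket.)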
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.370283901Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c04b0c3-3115-452a-8478-4e74b97a25c9 name=/runtime.v1.RuntimeService/Version
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.370368893Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c04b0c3-3115-452a-8478-4e74b97a25c9 name=/runtime.v1.RuntimeService/Version
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.371593483Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f3d4028-297e-4f24-be48-040e93ba7662 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.373270312Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990050373212655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:526434,},InodesUsed:&UInt64Value{Value:185,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f3d4028-297e-4f24-be48-040e93ba7662 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.373759278Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b8d30a8-8043-4579-8f68-eb67dd326735 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.373828492Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b8d30a8-8043-4579-8f68-eb67dd326735 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:40:50 addons-306463 crio[672]: time="2024-09-10 17:40:50.374252062Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8d3b80ca19697f974829611946541fe4da060d6d55373edfbf9e15edf3534f5,PodSandboxId:634edff03c95301a882a37c72e3b249677be4404dc923fad8ec9f30cf13eb89a,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725990009197945155,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-2be347cf-fef5-49ec-8123-8e3cf02ab859,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 84656cc0-f634-4b52-8551-d72d75859b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a074423e8ebdd410b47583f2b576777aea8a71a4ad9647b194772a90214f53d4,PodSandboxId:ce09f6762e26e4520e39dd28a705c95b9312ad8f6d0ab7885cd53880b4e0aadf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:1f3c4ec00c804f65805bd22b358c8fbba6b0ab4e32171adba33058cf635923aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:87ff76f62d367950186bde563642e39208c0e2b4afc833b4b3b01b8fef60ae9e,State:CONTAINER_EXITED,CreatedAt:1725990005333852404,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee6952e4-9519-41d9-bcd1-f9113da1df63,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582aef687e6f17f3c2a84515f32a26653e60025be2de8edfede124d4524777cf,PodSandboxId:d0831ebcc1f1d7c65d9218dea5fa54b7beacaf1a8142f2630e7f0b73140fb9b1,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725989493343593602,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9cff5,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c71f9bb4-5d5d-48be-b1a6-4d832400d952,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"
containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d4dc66b78cf362fa98ace3d25b3d6d54fb9efed385b4e766904f13e18ce020e,PodSandboxId:df8fa8192870cdf368c4f388ed9689e0d66ecaa5ff5bcecacbb07305e3865e56,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1725989489716072720,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-rrx9m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 63d978ce-6789-493d-a46f
-de2712ba51dd,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b0dcc0b067c1fe33cab5925ccf93236b0b4235680f7460bc114b84a50691c3a5,PodSandboxId:203b4e990f94eb34108df6c75f63302a21901ab5de4b10d66645bffb8a76ff0e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725989466907044617,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tddrl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bfd83841-db7b-49e3-9721-8b75e0cdd1c7,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4919ba67c923ab8d43533c369b87ef4ced592fbe0c0deb116fbbb857ebf533ae,PodSandboxId:409de028dc3d4ce1988813969f751c45cb1b7bec3478589a41e13f94d2bb567f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e645
7aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725989462977955428,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zp9t8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a3fb5872-1ea0-4a79-a942-351d4c144608,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:966e8fda1665b6c34bccfc37e68af3aa7ba5fc8c6643150d3573006c20f4faef,PodSandboxId:f7bfd95e6be866d636d9cb971d37220f839306f979253d93623cc5d17f36d789,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae
5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1725989459800507449,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-9mkhq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9ca24bc-2998-4f22-943c-ca875f6ed7cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da547a7aead5af7079e6c4408d2460c00c2c3f0ddd2cb2942ec0adff4b01a99b,PodSandboxId:7
9f7b009c80604c6c8152b7557ea269ad1fe2ae7876298e80f20fa5a2e49c54f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1725989454186936320,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-dmz6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61812c3a-2248-430b-97e8-3b188671e0eb,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e0270fff871840c9a822cf5b0f515d34827f1d9fed151a0f2d5082fa11dd27b,PodSandboxId:bf5609ea0b023af2c716c42016528abbd115cf45498ea88cc431983aea60eb64,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725989440357118697,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6wcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc23d17-89f0-47a5-8880-0cf317f8a901,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65f3b74cfc39b0ee9fa5464921aff8a5bf225157874cdb878b7ba91a8ace91fd,PodSandboxId:1f7c97e7118ed657a22e102550a0d75fc29b4dfdc7e4d2d7e9aa3c445e819305,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1725989435011140038,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-smwnt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf2f1df4-c2cd-4ab3-927a-16595a20e831,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3927bb112d8eef7839083f69c6d04d29e13d523f980de943445625e77252794,PodSandboxId:2ad00bd68949be4886d451d94ee2da0c9daa4bf60aa60b778c350762f1581fca,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1725989426323330056,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33998c91-0157-46f1-aa90-c6001166
fff3,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2884c8e7918f3f66e43acc12fa41b52b90eaae0693407161bbabf4a79747bf,PodSandboxId:f3d0ecd016c61169dce7badc41eb39f61e6cb0d229dd82f9df2eaa408100403d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725989416147478719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 6196330e-c966-44c2-aedd-6dc5e570c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a215f27453dd39d7db9f39e41ec7e4ac2e49500ff6c716a9bfafc824954266b,PodSandboxId:a8d7383a3c4c8c7c4d4b4b82ad25e06f4d8654846ce6d88ed03d348a31c77acf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725989412993871459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5qxp,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 5ce9784e-e567-4ff5-a7fc-cb8589c471c1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a73d39390d5ae35c72fc26b22cf8e0830151755c0757d4db86635b5c22b5975,PodSandboxId:8987d0bb394a57cd9a58f92728311e29e63b4b78f29249c4fc624a2e49afcc09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINE
R_RUNNING,CreatedAt:1725989410338539077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js72f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97604350-aebe-4a6c-b687-0204de19c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f698d8d7966b01930ddbc6587bb08f04f7e2f8adad63b8a3e9887aed729b9495,PodSandboxId:3e898142a158834481c7a1d8ff69ecd76325a6c790734ed640797010ed8e7649,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725989399
351056533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1009c91d9d6b512577ae300fa67a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2fd106868bc397fe25e2b6527f1b60b5821fe72928318b4f27a3432e9cea54,PodSandboxId:bff13732bced4bb24a37b7520e34f6c69fd42234a9343b0a1807cb796c72700a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172598939936638508
8,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef6519980e55c7294622197c7f614a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a702e238565e00637134e10aceb4b0a000411fb06af5a78bdce0a089d6343b7b,PodSandboxId:bdfc49df82eed18f6ae46cecdb49805465a79a0b7da6ba3dc74ffb7bd3ee5038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:17259893993170662
21,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa4afb4c8677b28249b35dd2b3e2495,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9820f2fa1dd2a81346b7b004efde785c85e92df60bded4a7237f9c20a0de805c,PodSandboxId:636a4a297aa530f5e18e30cd561b223da7ddada95440c3fce95b8b691604e464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725989399332095776,Labels:map[string]string
{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a02b0c0abfab97cfeed0b549a823c12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b8d30a8-8043-4579-8f68-eb67dd326735 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	d8d3b80ca1969       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                             41 seconds ago      Exited              helper-pod                 0                   634edff03c953       helper-pod-delete-pvc-2be347cf-fef5-49ec-8123-8e3cf02ab859
	a074423e8ebdd       docker.io/library/busybox@sha256:1f3c4ec00c804f65805bd22b358c8fbba6b0ab4e32171adba33058cf635923aa                            45 seconds ago      Exited              busybox                    0                   ce09f6762e26e       test-local-path
	582aef687e6f1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 9 minutes ago       Running             gcp-auth                   0                   d0831ebcc1f1d       gcp-auth-89d5ffd79-9cff5
	8d4dc66b78cf3       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             9 minutes ago       Running             controller                 0                   df8fa8192870c       ingress-nginx-controller-bc57996ff-rrx9m
	b0dcc0b067c1f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              patch                      0                   203b4e990f94e       ingress-nginx-admission-patch-tddrl
	4919ba67c923a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              create                     0                   409de028dc3d4       ingress-nginx-admission-create-zp9t8
	966e8fda1665b       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               9 minutes ago       Running             cloud-spanner-emulator     0                   f7bfd95e6be86       cloud-spanner-emulator-769b77f747-9mkhq
	da547a7aead5a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4              9 minutes ago       Exited              registry-proxy             0                   79f7b009c8060       registry-proxy-dmz6w
	9e0270fff8718       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        10 minutes ago      Running             metrics-server             0                   bf5609ea0b023       metrics-server-84c5f94fbc-q6wcq
	65f3b74cfc39b       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     10 minutes ago      Running             nvidia-device-plugin-ctr   0                   1f7c97e7118ed       nvidia-device-plugin-daemonset-smwnt
	c3927bb112d8e       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns       0                   2ad00bd68949b       kube-ingress-dns-minikube
	bc2884c8e7918       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner        0                   f3d0ecd016c61       storage-provisioner
	0a215f27453dd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             10 minutes ago      Running             coredns                    0                   a8d7383a3c4c8       coredns-6f6b679f8f-c5qxp
	3a73d39390d5a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             10 minutes ago      Running             kube-proxy                 0                   8987d0bb394a5       kube-proxy-js72f
	1b2fd106868bc       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             10 minutes ago      Running             kube-controller-manager    0                   bff13732bced4       kube-controller-manager-addons-306463
	f698d8d7966b0       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             10 minutes ago      Running             kube-scheduler             0                   3e898142a1588       kube-scheduler-addons-306463
	9820f2fa1dd2a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             10 minutes ago      Running             etcd                       0                   636a4a297aa53       etcd-addons-306463
	a702e238565e0       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             10 minutes ago      Running             kube-apiserver             0                   bdfc49df82eed       kube-apiserver-addons-306463
	
	
	==> coredns [0a215f27453dd39d7db9f39e41ec7e4ac2e49500ff6c716a9bfafc824954266b] <==
	[INFO] 127.0.0.1:46294 - 34342 "HINFO IN 2988755105619345519.8178505747039127944. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010883316s
	[INFO] 10.244.0.7:51528 - 9833 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000522483s
	[INFO] 10.244.0.7:51528 - 52590 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000287548s
	[INFO] 10.244.0.7:49547 - 11105 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000080302s
	[INFO] 10.244.0.7:49547 - 54119 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038918s
	[INFO] 10.244.0.7:51045 - 63866 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000058098s
	[INFO] 10.244.0.7:51045 - 57464 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00006068s
	[INFO] 10.244.0.7:48884 - 49406 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000072191s
	[INFO] 10.244.0.7:48884 - 18943 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000044584s
	[INFO] 10.244.0.7:48605 - 63647 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000058049s
	[INFO] 10.244.0.7:48605 - 26013 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00010894s
	[INFO] 10.244.0.7:53898 - 7835 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000034622s
	[INFO] 10.244.0.7:53898 - 30617 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00003803s
	[INFO] 10.244.0.7:41577 - 5082 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072855s
	[INFO] 10.244.0.7:41577 - 14808 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000127251s
	[INFO] 10.244.0.7:35153 - 44630 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000117348s
	[INFO] 10.244.0.7:35153 - 21591 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000061476s
	[INFO] 10.244.0.22:53652 - 52736 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000525847s
	[INFO] 10.244.0.22:51909 - 33747 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000080647s
	[INFO] 10.244.0.22:59992 - 15038 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000160421s
	[INFO] 10.244.0.22:50214 - 27016 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000071597s
	[INFO] 10.244.0.22:58245 - 14301 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127195s
	[INFO] 10.244.0.22:46404 - 10714 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000079794s
	[INFO] 10.244.0.22:37437 - 16123 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001244875s
	[INFO] 10.244.0.22:55509 - 30140 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001661686s
	
	
	==> describe nodes <==
	Name:               addons-306463
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-306463
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=addons-306463
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T17_30_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-306463
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:30:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-306463
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 17:40:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 17:40:36 +0000   Tue, 10 Sep 2024 17:30:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 17:40:36 +0000   Tue, 10 Sep 2024 17:30:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 17:40:36 +0000   Tue, 10 Sep 2024 17:30:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 17:40:36 +0000   Tue, 10 Sep 2024 17:30:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.144
	  Hostname:    addons-306463
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 dd3fd5b0d8a84e1595be7f0c7913d0fd
	  System UUID:                dd3fd5b0-d8a8-4e15-95be-7f0c7913d0fd
	  Boot ID:                    41ce101e-c89c-4773-988f-9e0f2e4ee815
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     cloud-spanner-emulator-769b77f747-9mkhq     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  gcp-auth                    gcp-auth-89d5ffd79-9cff5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-rrx9m    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-c5qxp                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-306463                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-306463                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-306463       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-js72f                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-306463                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-q6wcq             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 nvidia-device-plugin-daemonset-smwnt        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-306463 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-306463 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-306463 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m   kubelet          Node addons-306463 status is now: NodeReady
	  Normal  RegisteredNode           10m   node-controller  Node addons-306463 event: Registered Node addons-306463 in Controller
	
	
	==> dmesg <==
	[  +5.032447] kauditd_printk_skb: 99 callbacks suppressed
	[  +5.405994] kauditd_printk_skb: 129 callbacks suppressed
	[  +6.100135] kauditd_printk_skb: 98 callbacks suppressed
	[ +14.038409] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.181145] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.627457] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.626250] kauditd_printk_skb: 2 callbacks suppressed
	[Sep10 17:31] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.055474] kauditd_printk_skb: 66 callbacks suppressed
	[  +5.112841] kauditd_printk_skb: 31 callbacks suppressed
	[ +13.239386] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.062688] kauditd_printk_skb: 49 callbacks suppressed
	[  +9.206322] kauditd_printk_skb: 9 callbacks suppressed
	[Sep10 17:32] kauditd_printk_skb: 28 callbacks suppressed
	[Sep10 17:34] kauditd_printk_skb: 28 callbacks suppressed
	[Sep10 17:37] kauditd_printk_skb: 28 callbacks suppressed
	[Sep10 17:39] kauditd_printk_skb: 28 callbacks suppressed
	[  +9.622351] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.013540] kauditd_printk_skb: 39 callbacks suppressed
	[Sep10 17:40] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.890154] kauditd_printk_skb: 20 callbacks suppressed
	[ +15.600296] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.244502] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.735054] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.942009] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [9820f2fa1dd2a81346b7b004efde785c85e92df60bded4a7237f9c20a0de805c] <==
	{"level":"warn","ts":"2024-09-10T17:30:45.866761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.415869ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:30:45.866856Z","caller":"traceutil/trace.go:171","msg":"trace[1068047252] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:927; }","duration":"110.525419ms","start":"2024-09-10T17:30:45.756319Z","end":"2024-09-10T17:30:45.866845Z","steps":["trace[1068047252] 'range keys from in-memory index tree'  (duration: 110.294ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:30:45.867044Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.860548ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-10T17:30:45.867096Z","caller":"traceutil/trace.go:171","msg":"trace[1809753364] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:927; }","duration":"102.917852ms","start":"2024-09-10T17:30:45.764169Z","end":"2024-09-10T17:30:45.867087Z","steps":["trace[1809753364] 'range keys from in-memory index tree'  (duration: 102.76827ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:30:52.059608Z","caller":"traceutil/trace.go:171","msg":"trace[1298428410] transaction","detail":"{read_only:false; response_revision:937; number_of_response:1; }","duration":"112.30505ms","start":"2024-09-10T17:30:51.947283Z","end":"2024-09-10T17:30:52.059589Z","steps":["trace[1298428410] 'process raft request'  (duration: 112.162762ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:30:53.611772Z","caller":"traceutil/trace.go:171","msg":"trace[1258985527] linearizableReadLoop","detail":"{readStateIndex:964; appliedIndex:963; }","duration":"178.702934ms","start":"2024-09-10T17:30:53.433055Z","end":"2024-09-10T17:30:53.611758Z","steps":["trace[1258985527] 'read index received'  (duration: 178.578616ms)","trace[1258985527] 'applied index is now lower than readState.Index'  (duration: 123.822µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-10T17:30:53.611866Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.792771ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:30:53.611943Z","caller":"traceutil/trace.go:171","msg":"trace[1456752792] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:938; }","duration":"178.885873ms","start":"2024-09-10T17:30:53.433052Z","end":"2024-09-10T17:30:53.611937Z","steps":["trace[1456752792] 'agreement among raft nodes before linearized reading'  (duration: 178.78055ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:31:14.585857Z","caller":"traceutil/trace.go:171","msg":"trace[1736615180] linearizableReadLoop","detail":"{readStateIndex:1118; appliedIndex:1117; }","duration":"331.383713ms","start":"2024-09-10T17:31:14.254456Z","end":"2024-09-10T17:31:14.585840Z","steps":["trace[1736615180] 'read index received'  (duration: 331.171762ms)","trace[1736615180] 'applied index is now lower than readState.Index'  (duration: 211.53µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-10T17:31:14.585995Z","caller":"traceutil/trace.go:171","msg":"trace[616425486] transaction","detail":"{read_only:false; response_revision:1087; number_of_response:1; }","duration":"377.322054ms","start":"2024-09-10T17:31:14.208667Z","end":"2024-09-10T17:31:14.585989Z","steps":["trace[616425486] 'process raft request'  (duration: 377.062724ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:31:14.586082Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T17:31:14.208652Z","time spent":"377.361583ms","remote":"127.0.0.1:39804","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1074 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-10T17:31:14.586243Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.149848ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:31:14.586321Z","caller":"traceutil/trace.go:171","msg":"trace[1902664727] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1087; }","duration":"295.232349ms","start":"2024-09-10T17:31:14.291079Z","end":"2024-09-10T17:31:14.586312Z","steps":["trace[1902664727] 'agreement among raft nodes before linearized reading'  (duration: 295.125426ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:31:14.586354Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.675068ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:31:14.586388Z","caller":"traceutil/trace.go:171","msg":"trace[681843452] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1087; }","duration":"153.706065ms","start":"2024-09-10T17:31:14.432677Z","end":"2024-09-10T17:31:14.586383Z","steps":["trace[681843452] 'agreement among raft nodes before linearized reading'  (duration: 153.67064ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:31:14.586334Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"331.898535ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:31:14.587217Z","caller":"traceutil/trace.go:171","msg":"trace[59462955] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1087; }","duration":"332.778889ms","start":"2024-09-10T17:31:14.254428Z","end":"2024-09-10T17:31:14.587207Z","steps":["trace[59462955] 'agreement among raft nodes before linearized reading'  (duration: 331.885636ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:31:14.587550Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T17:31:14.254397Z","time spent":"333.142093ms","remote":"127.0.0.1:39820","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-09-10T17:31:25.693826Z","caller":"traceutil/trace.go:171","msg":"trace[916338974] transaction","detail":"{read_only:false; response_revision:1113; number_of_response:1; }","duration":"175.709853ms","start":"2024-09-10T17:31:25.518097Z","end":"2024-09-10T17:31:25.693806Z","steps":["trace[916338974] 'process raft request'  (duration: 175.242522ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:31:28.273694Z","caller":"traceutil/trace.go:171","msg":"trace[326156197] transaction","detail":"{read_only:false; response_revision:1118; number_of_response:1; }","duration":"145.50673ms","start":"2024-09-10T17:31:28.128165Z","end":"2024-09-10T17:31:28.273671Z","steps":["trace[326156197] 'process raft request'  (duration: 145.173512ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:31:33.252659Z","caller":"traceutil/trace.go:171","msg":"trace[803236101] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"168.076485ms","start":"2024-09-10T17:31:33.084566Z","end":"2024-09-10T17:31:33.252643Z","steps":["trace[803236101] 'process raft request'  (duration: 167.526703ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:39:54.247760Z","caller":"traceutil/trace.go:171","msg":"trace[1408959823] transaction","detail":"{read_only:false; response_revision:2000; number_of_response:1; }","duration":"120.176741ms","start":"2024-09-10T17:39:54.127561Z","end":"2024-09-10T17:39:54.247737Z","steps":["trace[1408959823] 'process raft request'  (duration: 120.058138ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:40:00.350559Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1527}
	{"level":"info","ts":"2024-09-10T17:40:00.394567Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1527,"took":"43.479842ms","hash":4077854701,"current-db-size-bytes":6705152,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":3575808,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-10T17:40:00.394619Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4077854701,"revision":1527,"compact-revision":-1}
	
	
	==> gcp-auth [582aef687e6f17f3c2a84515f32a26653e60025be2de8edfede124d4524777cf] <==
	2024/09/10 17:31:33 GCP Auth Webhook started!
	2024/09/10 17:31:34 Ready to marshal response ...
	2024/09/10 17:31:34 Ready to write response ...
	2024/09/10 17:31:35 Ready to marshal response ...
	2024/09/10 17:31:35 Ready to write response ...
	2024/09/10 17:31:35 Ready to marshal response ...
	2024/09/10 17:31:35 Ready to write response ...
	2024/09/10 17:39:48 Ready to marshal response ...
	2024/09/10 17:39:48 Ready to write response ...
	2024/09/10 17:39:48 Ready to marshal response ...
	2024/09/10 17:39:48 Ready to write response ...
	2024/09/10 17:39:54 Ready to marshal response ...
	2024/09/10 17:39:54 Ready to write response ...
	2024/09/10 17:39:59 Ready to marshal response ...
	2024/09/10 17:39:59 Ready to write response ...
	2024/09/10 17:39:59 Ready to marshal response ...
	2024/09/10 17:39:59 Ready to write response ...
	2024/09/10 17:40:08 Ready to marshal response ...
	2024/09/10 17:40:08 Ready to write response ...
	2024/09/10 17:40:19 Ready to marshal response ...
	2024/09/10 17:40:19 Ready to write response ...
	
	
	==> kernel <==
	 17:40:50 up 11 min,  0 users,  load average: 0.60, 0.51, 0.42
	Linux addons-306463 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a702e238565e00637134e10aceb4b0a000411fb06af5a78bdce0a089d6343b7b] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0910 17:31:45.673482       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.1.101:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.1.101:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.1.101:443: connect: connection refused" logger="UnhandledError"
	E0910 17:31:45.675824       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.1.101:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.1.101:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.1.101:443: connect: connection refused" logger="UnhandledError"
	I0910 17:31:45.735181       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0910 17:39:44.157338       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0910 17:39:45.189265       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0910 17:40:02.002508       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0910 17:40:09.586498       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0910 17:40:09.594344       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0910 17:40:09.601249       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0910 17:40:24.601459       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0910 17:40:34.932458       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:40:34.932521       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:40:34.975423       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:40:34.975477       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:40:34.992396       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:40:34.992451       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:40:35.118983       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:40:35.119164       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0910 17:40:36.120579       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0910 17:40:36.126608       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0910 17:40:37.403724       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0910 17:40:38.410312       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [1b2fd106868bc397fe25e2b6527f1b60b5821fe72928318b4f27a3432e9cea54] <==
	I0910 17:40:36.509025       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-306463"
	W0910 17:40:37.131167       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:40:37.131279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:40:37.513587       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:40:37.513640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:40:37.881189       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:40:37.881244       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:40:38.908132       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0910 17:40:38.908190       1 shared_informer.go:320] Caches are synced for resource quota
	W0910 17:40:39.007877       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:40:39.008012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:40:39.362701       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0910 17:40:39.362735       1 shared_informer.go:320] Caches are synced for garbage collector
	W0910 17:40:40.325167       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:40:40.325236       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:40:41.208332       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:40:41.208404       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:40:41.886671       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="5.838µs"
	W0910 17:40:42.415100       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:40:42.415157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:40:46.452811       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:40:46.453049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:40:49.320298       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.523µs"
	W0910 17:40:50.505613       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:40:50.505676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [3a73d39390d5ae35c72fc26b22cf8e0830151755c0757d4db86635b5c22b5975] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 17:30:10.959254       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 17:30:10.977328       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.144"]
	E0910 17:30:10.977427       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 17:30:11.055345       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 17:30:11.055408       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 17:30:11.055442       1 server_linux.go:169] "Using iptables Proxier"
	I0910 17:30:11.058990       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 17:30:11.059418       1 server.go:483] "Version info" version="v1.31.0"
	I0910 17:30:11.059455       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 17:30:11.061020       1 config.go:197] "Starting service config controller"
	I0910 17:30:11.061045       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 17:30:11.061068       1 config.go:104] "Starting endpoint slice config controller"
	I0910 17:30:11.061072       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 17:30:11.061523       1 config.go:326] "Starting node config controller"
	I0910 17:30:11.061530       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 17:30:11.161679       1 shared_informer.go:320] Caches are synced for node config
	I0910 17:30:11.161709       1 shared_informer.go:320] Caches are synced for service config
	I0910 17:30:11.161736       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f698d8d7966b01930ddbc6587bb08f04f7e2f8adad63b8a3e9887aed729b9495] <==
	W0910 17:30:01.806509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 17:30:01.806539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:01.806590       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 17:30:01.806622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:01.806670       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 17:30:01.806700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:01.806755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0910 17:30:01.806784       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:01.810146       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0910 17:30:01.811967       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0910 17:30:02.656866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 17:30:02.656998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:02.852652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 17:30:02.852741       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:02.914536       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 17:30:02.914590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:02.973206       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 17:30:02.973257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:03.010457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 17:30:03.010597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:03.040102       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0910 17:30:03.040268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:03.048988       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0910 17:30:03.049072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0910 17:30:03.383329       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 17:40:42 addons-306463 kubelet[1220]: I0910 17:40:42.617297    1220 scope.go:117] "RemoveContainer" containerID="60ea43706f173cd29902536784ce8971a651502f1803a4e81dcb999beb58f52c"
	Sep 10 17:40:42 addons-306463 kubelet[1220]: E0910 17:40:42.617975    1220 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60ea43706f173cd29902536784ce8971a651502f1803a4e81dcb999beb58f52c\": container with ID starting with 60ea43706f173cd29902536784ce8971a651502f1803a4e81dcb999beb58f52c not found: ID does not exist" containerID="60ea43706f173cd29902536784ce8971a651502f1803a4e81dcb999beb58f52c"
	Sep 10 17:40:42 addons-306463 kubelet[1220]: I0910 17:40:42.618023    1220 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60ea43706f173cd29902536784ce8971a651502f1803a4e81dcb999beb58f52c"} err="failed to get container status \"60ea43706f173cd29902536784ce8971a651502f1803a4e81dcb999beb58f52c\": rpc error: code = NotFound desc = could not find container \"60ea43706f173cd29902536784ce8971a651502f1803a4e81dcb999beb58f52c\": container with ID starting with 60ea43706f173cd29902536784ce8971a651502f1803a4e81dcb999beb58f52c not found: ID does not exist"
	Sep 10 17:40:44 addons-306463 kubelet[1220]: I0910 17:40:44.123469    1220 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="760398d8-bb5a-44a2-b788-de63c290c3fc" path="/var/lib/kubelet/pods/760398d8-bb5a-44a2-b788-de63c290c3fc/volumes"
	Sep 10 17:40:44 addons-306463 kubelet[1220]: E0910 17:40:44.407259    1220 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990044406789760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:526434,},InodesUsed:&UInt64Value{Value:185,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:40:44 addons-306463 kubelet[1220]: E0910 17:40:44.407297    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990044406789760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:526434,},InodesUsed:&UInt64Value{Value:185,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:40:48 addons-306463 kubelet[1220]: I0910 17:40:48.983860    1220 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ae95440d-a935-441a-9b28-0da2f758cba7-gcp-creds\") pod \"ae95440d-a935-441a-9b28-0da2f758cba7\" (UID: \"ae95440d-a935-441a-9b28-0da2f758cba7\") "
	Sep 10 17:40:48 addons-306463 kubelet[1220]: I0910 17:40:48.983987    1220 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn6xh\" (UniqueName: \"kubernetes.io/projected/ae95440d-a935-441a-9b28-0da2f758cba7-kube-api-access-qn6xh\") pod \"ae95440d-a935-441a-9b28-0da2f758cba7\" (UID: \"ae95440d-a935-441a-9b28-0da2f758cba7\") "
	Sep 10 17:40:48 addons-306463 kubelet[1220]: I0910 17:40:48.984684    1220 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae95440d-a935-441a-9b28-0da2f758cba7-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "ae95440d-a935-441a-9b28-0da2f758cba7" (UID: "ae95440d-a935-441a-9b28-0da2f758cba7"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 10 17:40:48 addons-306463 kubelet[1220]: I0910 17:40:48.996038    1220 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae95440d-a935-441a-9b28-0da2f758cba7-kube-api-access-qn6xh" (OuterVolumeSpecName: "kube-api-access-qn6xh") pod "ae95440d-a935-441a-9b28-0da2f758cba7" (UID: "ae95440d-a935-441a-9b28-0da2f758cba7"). InnerVolumeSpecName "kube-api-access-qn6xh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:40:49 addons-306463 kubelet[1220]: I0910 17:40:49.085122    1220 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ae95440d-a935-441a-9b28-0da2f758cba7-gcp-creds\") on node \"addons-306463\" DevicePath \"\""
	Sep 10 17:40:49 addons-306463 kubelet[1220]: I0910 17:40:49.085147    1220 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qn6xh\" (UniqueName: \"kubernetes.io/projected/ae95440d-a935-441a-9b28-0da2f758cba7-kube-api-access-qn6xh\") on node \"addons-306463\" DevicePath \"\""
	Sep 10 17:40:49 addons-306463 kubelet[1220]: I0910 17:40:49.688184    1220 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzsm2\" (UniqueName: \"kubernetes.io/projected/e9ac504f-2687-4fc9-bc82-285fcdbd1c77-kube-api-access-qzsm2\") pod \"e9ac504f-2687-4fc9-bc82-285fcdbd1c77\" (UID: \"e9ac504f-2687-4fc9-bc82-285fcdbd1c77\") "
	Sep 10 17:40:49 addons-306463 kubelet[1220]: I0910 17:40:49.690416    1220 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9ac504f-2687-4fc9-bc82-285fcdbd1c77-kube-api-access-qzsm2" (OuterVolumeSpecName: "kube-api-access-qzsm2") pod "e9ac504f-2687-4fc9-bc82-285fcdbd1c77" (UID: "e9ac504f-2687-4fc9-bc82-285fcdbd1c77"). InnerVolumeSpecName "kube-api-access-qzsm2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:40:49 addons-306463 kubelet[1220]: I0910 17:40:49.724829    1220 scope.go:117] "RemoveContainer" containerID="abac808664a1614c6734ba9e526843e4f9a2ff40f6a8655101e9419ca5c67c3a"
	Sep 10 17:40:49 addons-306463 kubelet[1220]: I0910 17:40:49.760805    1220 scope.go:117] "RemoveContainer" containerID="abac808664a1614c6734ba9e526843e4f9a2ff40f6a8655101e9419ca5c67c3a"
	Sep 10 17:40:49 addons-306463 kubelet[1220]: E0910 17:40:49.772161    1220 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abac808664a1614c6734ba9e526843e4f9a2ff40f6a8655101e9419ca5c67c3a\": container with ID starting with abac808664a1614c6734ba9e526843e4f9a2ff40f6a8655101e9419ca5c67c3a not found: ID does not exist" containerID="abac808664a1614c6734ba9e526843e4f9a2ff40f6a8655101e9419ca5c67c3a"
	Sep 10 17:40:49 addons-306463 kubelet[1220]: I0910 17:40:49.772235    1220 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abac808664a1614c6734ba9e526843e4f9a2ff40f6a8655101e9419ca5c67c3a"} err="failed to get container status \"abac808664a1614c6734ba9e526843e4f9a2ff40f6a8655101e9419ca5c67c3a\": rpc error: code = NotFound desc = could not find container \"abac808664a1614c6734ba9e526843e4f9a2ff40f6a8655101e9419ca5c67c3a\": container with ID starting with abac808664a1614c6734ba9e526843e4f9a2ff40f6a8655101e9419ca5c67c3a not found: ID does not exist"
	Sep 10 17:40:49 addons-306463 kubelet[1220]: I0910 17:40:49.791218    1220 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qzsm2\" (UniqueName: \"kubernetes.io/projected/e9ac504f-2687-4fc9-bc82-285fcdbd1c77-kube-api-access-qzsm2\") on node \"addons-306463\" DevicePath \"\""
	Sep 10 17:40:49 addons-306463 kubelet[1220]: I0910 17:40:49.892187    1220 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48rzn\" (UniqueName: \"kubernetes.io/projected/61812c3a-2248-430b-97e8-3b188671e0eb-kube-api-access-48rzn\") pod \"61812c3a-2248-430b-97e8-3b188671e0eb\" (UID: \"61812c3a-2248-430b-97e8-3b188671e0eb\") "
	Sep 10 17:40:49 addons-306463 kubelet[1220]: I0910 17:40:49.896289    1220 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61812c3a-2248-430b-97e8-3b188671e0eb-kube-api-access-48rzn" (OuterVolumeSpecName: "kube-api-access-48rzn") pod "61812c3a-2248-430b-97e8-3b188671e0eb" (UID: "61812c3a-2248-430b-97e8-3b188671e0eb"). InnerVolumeSpecName "kube-api-access-48rzn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:40:49 addons-306463 kubelet[1220]: I0910 17:40:49.992957    1220 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-48rzn\" (UniqueName: \"kubernetes.io/projected/61812c3a-2248-430b-97e8-3b188671e0eb-kube-api-access-48rzn\") on node \"addons-306463\" DevicePath \"\""
	Sep 10 17:40:50 addons-306463 kubelet[1220]: I0910 17:40:50.130109    1220 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae95440d-a935-441a-9b28-0da2f758cba7" path="/var/lib/kubelet/pods/ae95440d-a935-441a-9b28-0da2f758cba7/volumes"
	Sep 10 17:40:50 addons-306463 kubelet[1220]: I0910 17:40:50.130383    1220 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9ac504f-2687-4fc9-bc82-285fcdbd1c77" path="/var/lib/kubelet/pods/e9ac504f-2687-4fc9-bc82-285fcdbd1c77/volumes"
	Sep 10 17:40:50 addons-306463 kubelet[1220]: I0910 17:40:50.741458    1220 scope.go:117] "RemoveContainer" containerID="da547a7aead5af7079e6c4408d2460c00c2c3f0ddd2cb2942ec0adff4b01a99b"
	
	
	==> storage-provisioner [bc2884c8e7918f3f66e43acc12fa41b52b90eaae0693407161bbabf4a79747bf] <==
	I0910 17:30:16.804855       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 17:30:16.824584       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 17:30:16.824662       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 17:30:16.842816       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 17:30:16.866442       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-306463_b8efc0b6-d765-4a34-ad76-23431d9ccb05!
	I0910 17:30:16.866012       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"337c439a-f46b-493b-9e06-ad4421b197f3", APIVersion:"v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-306463_b8efc0b6-d765-4a34-ad76-23431d9ccb05 became leader
	I0910 17:30:16.971090       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-306463_b8efc0b6-d765-4a34-ad76-23431d9ccb05!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-306463 -n addons-306463
helpers_test.go:261: (dbg) Run:  kubectl --context addons-306463 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-zp9t8 ingress-nginx-admission-patch-tddrl
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-306463 describe pod busybox ingress-nginx-admission-create-zp9t8 ingress-nginx-admission-patch-tddrl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-306463 describe pod busybox ingress-nginx-admission-create-zp9t8 ingress-nginx-admission-patch-tddrl: exit status 1 (63.579801ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-306463/192.168.39.144
	Start Time:       Tue, 10 Sep 2024 17:31:35 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7msjq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7msjq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m16s                  default-scheduler  Successfully assigned default/busybox to addons-306463
	  Normal   Pulling    7m44s (x4 over 9m16s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m44s (x4 over 9m16s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m44s (x4 over 9m16s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m31s (x6 over 9m15s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m5s (x21 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zp9t8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tddrl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-306463 describe pod busybox ingress-nginx-admission-create-zp9t8 ingress-nginx-admission-patch-tddrl: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.87s)
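Note on the scheduler log above: the repeated "is forbidden: User \"system:kube-scheduler\" cannot list resource ..." warnings appeared only during startup and stopped once "Caches are synced" was logged, which is consistent with RBAC bindings not yet being available at that moment rather than a lasting permission problem. If such errors persisted, a hypothetical follow-up check (not part of this test run; the commands below are an assumption, not something the suite executes) would be to ask the API server directly whether the scheduler identity holds the denied permissions:

  # Hypothetical troubleshooting sketch: confirm the kube-scheduler user can list
  # the resources it was denied while the control plane was still coming up.
  kubectl --context addons-306463 auth can-i list configmaps --as=system:kube-scheduler -n kube-system
  kubectl --context addons-306463 auth can-i list persistentvolumes --as=system:kube-scheduler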

                                                
                                    
TestAddons/parallel/Ingress (151.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-306463 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-306463 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-306463 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7eaa2d0d-141b-494c-aa38-7e6697727bb4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7eaa2d0d-141b-494c-aa38-7e6697727bb4] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.005621165s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-306463 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-306463 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.462852223s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
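Aside: the stderr above reports "ssh: Process exited with status 28"; curl uses exit code 28 for an operation timeout, so the request through the ingress controller never completed in time. A minimal way to reproduce the probe by hand, assuming verbose output and an explicit timeout are added (these flags are not part of the test itself), would be:

  # Sketch only: rerun the failing check with verbosity and a bounded timeout.
  out/minikube-linux-amd64 -p addons-306463 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"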
addons_test.go:288: (dbg) Run:  kubectl --context addons-306463 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-306463 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.144
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-306463 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-306463 addons disable ingress-dns --alsologtostderr -v=1: (1.659789438s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-306463 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-306463 addons disable ingress --alsologtostderr -v=1: (7.675630677s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-306463 -n addons-306463
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-306463 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-306463 logs -n 25: (1.236528686s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-355146                                                                     | download-only-355146 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p download-only-545922                                                                     | download-only-545922 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p download-only-355146                                                                     | download-only-355146 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-896642 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | binary-mirror-896642                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42249                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-896642                                                                     | binary-mirror-896642 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| addons  | disable dashboard -p                                                                        | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | addons-306463                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | addons-306463                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-306463 --wait=true                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:39 UTC | 10 Sep 24 17:39 UTC |
	|         | addons-306463                                                                               |                      |         |         |                     |                     |
	| addons  | addons-306463 addons disable                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:39 UTC | 10 Sep 24 17:39 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-306463 ssh cat                                                                       | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | /opt/local-path-provisioner/pvc-2be347cf-fef5-49ec-8123-8e3cf02ab859_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-306463 addons disable                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-306463 addons                                                                        | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-306463 addons                                                                        | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-306463 addons disable                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-306463 ip                                                                            | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	| addons  | addons-306463 addons disable                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | -p addons-306463                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | -p addons-306463                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | addons-306463                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-306463 ssh curl -s                                                                   | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:41 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-306463 addons disable                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:41 UTC | 10 Sep 24 17:41 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-306463 ip                                                                            | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | 10 Sep 24 17:43 UTC |
	| addons  | addons-306463 addons disable                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | 10 Sep 24 17:43 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-306463 addons disable                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | 10 Sep 24 17:43 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:29:22
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:29:22.682209   13777 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:29:22.682460   13777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:22.682468   13777 out.go:358] Setting ErrFile to fd 2...
	I0910 17:29:22.682472   13777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:22.682675   13777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:29:22.683208   13777 out.go:352] Setting JSON to false
	I0910 17:29:22.683958   13777 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":715,"bootTime":1725988648,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:29:22.684008   13777 start.go:139] virtualization: kvm guest
	I0910 17:29:22.685971   13777 out.go:177] * [addons-306463] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 17:29:22.687151   13777 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:29:22.687158   13777 notify.go:220] Checking for updates...
	I0910 17:29:22.689304   13777 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:29:22.690364   13777 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:29:22.691502   13777 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:29:22.692665   13777 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 17:29:22.693954   13777 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:29:22.695291   13777 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:29:22.725551   13777 out.go:177] * Using the kvm2 driver based on user configuration
	I0910 17:29:22.726685   13777 start.go:297] selected driver: kvm2
	I0910 17:29:22.726698   13777 start.go:901] validating driver "kvm2" against <nil>
	I0910 17:29:22.726711   13777 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:29:22.727613   13777 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:29:22.727695   13777 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 17:29:22.741833   13777 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 17:29:22.741873   13777 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 17:29:22.742090   13777 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:29:22.742162   13777 cni.go:84] Creating CNI manager for ""
	I0910 17:29:22.742176   13777 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 17:29:22.742187   13777 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 17:29:22.742259   13777 start.go:340] cluster config:
	{Name:addons-306463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-306463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:29:22.742373   13777 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:29:22.744027   13777 out.go:177] * Starting "addons-306463" primary control-plane node in "addons-306463" cluster
	I0910 17:29:22.745131   13777 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:29:22.745164   13777 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0910 17:29:22.745174   13777 cache.go:56] Caching tarball of preloaded images
	I0910 17:29:22.745247   13777 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 17:29:22.745259   13777 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 17:29:22.745636   13777 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/config.json ...
	I0910 17:29:22.745666   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/config.json: {Name:mka38f023b13d99d139d0b4b4731421fa1c9c222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:22.745821   13777 start.go:360] acquireMachinesLock for addons-306463: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 17:29:22.745879   13777 start.go:364] duration metric: took 40.358µs to acquireMachinesLock for "addons-306463"
	I0910 17:29:22.745902   13777 start.go:93] Provisioning new machine with config: &{Name:addons-306463 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:addons-306463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:29:22.745979   13777 start.go:125] createHost starting for "" (driver="kvm2")
	I0910 17:29:22.747590   13777 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0910 17:29:22.747699   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:29:22.747737   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:29:22.761242   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36959
	I0910 17:29:22.761623   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:29:22.762084   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:29:22.762105   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:29:22.762416   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:29:22.762596   13777 main.go:141] libmachine: (addons-306463) Calling .GetMachineName
	I0910 17:29:22.762723   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:22.762855   13777 start.go:159] libmachine.API.Create for "addons-306463" (driver="kvm2")
	I0910 17:29:22.762901   13777 client.go:168] LocalClient.Create starting
	I0910 17:29:22.762931   13777 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem
	I0910 17:29:22.824214   13777 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem
	I0910 17:29:23.021609   13777 main.go:141] libmachine: Running pre-create checks...
	I0910 17:29:23.021632   13777 main.go:141] libmachine: (addons-306463) Calling .PreCreateCheck
	I0910 17:29:23.022141   13777 main.go:141] libmachine: (addons-306463) Calling .GetConfigRaw
	I0910 17:29:23.022504   13777 main.go:141] libmachine: Creating machine...
	I0910 17:29:23.022515   13777 main.go:141] libmachine: (addons-306463) Calling .Create
	I0910 17:29:23.022671   13777 main.go:141] libmachine: (addons-306463) Creating KVM machine...
	I0910 17:29:23.023879   13777 main.go:141] libmachine: (addons-306463) DBG | found existing default KVM network
	I0910 17:29:23.024609   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:23.024461   13799 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0910 17:29:23.024628   13777 main.go:141] libmachine: (addons-306463) DBG | created network xml: 
	I0910 17:29:23.024641   13777 main.go:141] libmachine: (addons-306463) DBG | <network>
	I0910 17:29:23.024649   13777 main.go:141] libmachine: (addons-306463) DBG |   <name>mk-addons-306463</name>
	I0910 17:29:23.024662   13777 main.go:141] libmachine: (addons-306463) DBG |   <dns enable='no'/>
	I0910 17:29:23.024669   13777 main.go:141] libmachine: (addons-306463) DBG |   
	I0910 17:29:23.024682   13777 main.go:141] libmachine: (addons-306463) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0910 17:29:23.024693   13777 main.go:141] libmachine: (addons-306463) DBG |     <dhcp>
	I0910 17:29:23.024763   13777 main.go:141] libmachine: (addons-306463) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0910 17:29:23.024789   13777 main.go:141] libmachine: (addons-306463) DBG |     </dhcp>
	I0910 17:29:23.024803   13777 main.go:141] libmachine: (addons-306463) DBG |   </ip>
	I0910 17:29:23.024817   13777 main.go:141] libmachine: (addons-306463) DBG |   
	I0910 17:29:23.024828   13777 main.go:141] libmachine: (addons-306463) DBG | </network>
	I0910 17:29:23.024838   13777 main.go:141] libmachine: (addons-306463) DBG | 
	I0910 17:29:23.029807   13777 main.go:141] libmachine: (addons-306463) DBG | trying to create private KVM network mk-addons-306463 192.168.39.0/24...
	I0910 17:29:23.091118   13777 main.go:141] libmachine: (addons-306463) DBG | private KVM network mk-addons-306463 192.168.39.0/24 created
	I0910 17:29:23.091150   13777 main.go:141] libmachine: (addons-306463) Setting up store path in /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463 ...
	I0910 17:29:23.091164   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:23.091073   13799 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:29:23.091178   13777 main.go:141] libmachine: (addons-306463) Building disk image from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 17:29:23.091208   13777 main.go:141] libmachine: (addons-306463) Downloading /home/jenkins/minikube-integration/19598-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 17:29:23.339080   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:23.338953   13799 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa...
	I0910 17:29:23.548665   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:23.548540   13799 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/addons-306463.rawdisk...
	I0910 17:29:23.548703   13777 main.go:141] libmachine: (addons-306463) DBG | Writing magic tar header
	I0910 17:29:23.548717   13777 main.go:141] libmachine: (addons-306463) DBG | Writing SSH key tar header
	I0910 17:29:23.548730   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:23.548675   13799 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463 ...
	I0910 17:29:23.548788   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463
	I0910 17:29:23.548813   13777 main.go:141] libmachine: (addons-306463) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463 (perms=drwx------)
	I0910 17:29:23.548826   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines
	I0910 17:29:23.548840   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:29:23.548846   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973
	I0910 17:29:23.548863   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0910 17:29:23.548876   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home/jenkins
	I0910 17:29:23.548888   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home
	I0910 17:29:23.548904   13777 main.go:141] libmachine: (addons-306463) DBG | Skipping /home - not owner
	I0910 17:29:23.548918   13777 main.go:141] libmachine: (addons-306463) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines (perms=drwxr-xr-x)
	I0910 17:29:23.548931   13777 main.go:141] libmachine: (addons-306463) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube (perms=drwxr-xr-x)
	I0910 17:29:23.548942   13777 main.go:141] libmachine: (addons-306463) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973 (perms=drwxrwxr-x)
	I0910 17:29:23.548949   13777 main.go:141] libmachine: (addons-306463) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0910 17:29:23.548957   13777 main.go:141] libmachine: (addons-306463) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0910 17:29:23.548963   13777 main.go:141] libmachine: (addons-306463) Creating domain...
	I0910 17:29:23.549957   13777 main.go:141] libmachine: (addons-306463) define libvirt domain using xml: 
	I0910 17:29:23.549976   13777 main.go:141] libmachine: (addons-306463) <domain type='kvm'>
	I0910 17:29:23.549984   13777 main.go:141] libmachine: (addons-306463)   <name>addons-306463</name>
	I0910 17:29:23.549995   13777 main.go:141] libmachine: (addons-306463)   <memory unit='MiB'>4000</memory>
	I0910 17:29:23.550004   13777 main.go:141] libmachine: (addons-306463)   <vcpu>2</vcpu>
	I0910 17:29:23.550011   13777 main.go:141] libmachine: (addons-306463)   <features>
	I0910 17:29:23.550016   13777 main.go:141] libmachine: (addons-306463)     <acpi/>
	I0910 17:29:23.550023   13777 main.go:141] libmachine: (addons-306463)     <apic/>
	I0910 17:29:23.550027   13777 main.go:141] libmachine: (addons-306463)     <pae/>
	I0910 17:29:23.550031   13777 main.go:141] libmachine: (addons-306463)     
	I0910 17:29:23.550036   13777 main.go:141] libmachine: (addons-306463)   </features>
	I0910 17:29:23.550043   13777 main.go:141] libmachine: (addons-306463)   <cpu mode='host-passthrough'>
	I0910 17:29:23.550050   13777 main.go:141] libmachine: (addons-306463)   
	I0910 17:29:23.550064   13777 main.go:141] libmachine: (addons-306463)   </cpu>
	I0910 17:29:23.550074   13777 main.go:141] libmachine: (addons-306463)   <os>
	I0910 17:29:23.550087   13777 main.go:141] libmachine: (addons-306463)     <type>hvm</type>
	I0910 17:29:23.550095   13777 main.go:141] libmachine: (addons-306463)     <boot dev='cdrom'/>
	I0910 17:29:23.550103   13777 main.go:141] libmachine: (addons-306463)     <boot dev='hd'/>
	I0910 17:29:23.550108   13777 main.go:141] libmachine: (addons-306463)     <bootmenu enable='no'/>
	I0910 17:29:23.550121   13777 main.go:141] libmachine: (addons-306463)   </os>
	I0910 17:29:23.550139   13777 main.go:141] libmachine: (addons-306463)   <devices>
	I0910 17:29:23.550156   13777 main.go:141] libmachine: (addons-306463)     <disk type='file' device='cdrom'>
	I0910 17:29:23.550170   13777 main.go:141] libmachine: (addons-306463)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/boot2docker.iso'/>
	I0910 17:29:23.550179   13777 main.go:141] libmachine: (addons-306463)       <target dev='hdc' bus='scsi'/>
	I0910 17:29:23.550185   13777 main.go:141] libmachine: (addons-306463)       <readonly/>
	I0910 17:29:23.550191   13777 main.go:141] libmachine: (addons-306463)     </disk>
	I0910 17:29:23.550198   13777 main.go:141] libmachine: (addons-306463)     <disk type='file' device='disk'>
	I0910 17:29:23.550206   13777 main.go:141] libmachine: (addons-306463)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0910 17:29:23.550221   13777 main.go:141] libmachine: (addons-306463)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/addons-306463.rawdisk'/>
	I0910 17:29:23.550239   13777 main.go:141] libmachine: (addons-306463)       <target dev='hda' bus='virtio'/>
	I0910 17:29:23.550246   13777 main.go:141] libmachine: (addons-306463)     </disk>
	I0910 17:29:23.550252   13777 main.go:141] libmachine: (addons-306463)     <interface type='network'>
	I0910 17:29:23.550256   13777 main.go:141] libmachine: (addons-306463)       <source network='mk-addons-306463'/>
	I0910 17:29:23.550262   13777 main.go:141] libmachine: (addons-306463)       <model type='virtio'/>
	I0910 17:29:23.550268   13777 main.go:141] libmachine: (addons-306463)     </interface>
	I0910 17:29:23.550274   13777 main.go:141] libmachine: (addons-306463)     <interface type='network'>
	I0910 17:29:23.550285   13777 main.go:141] libmachine: (addons-306463)       <source network='default'/>
	I0910 17:29:23.550301   13777 main.go:141] libmachine: (addons-306463)       <model type='virtio'/>
	I0910 17:29:23.550316   13777 main.go:141] libmachine: (addons-306463)     </interface>
	I0910 17:29:23.550326   13777 main.go:141] libmachine: (addons-306463)     <serial type='pty'>
	I0910 17:29:23.550334   13777 main.go:141] libmachine: (addons-306463)       <target port='0'/>
	I0910 17:29:23.550339   13777 main.go:141] libmachine: (addons-306463)     </serial>
	I0910 17:29:23.550346   13777 main.go:141] libmachine: (addons-306463)     <console type='pty'>
	I0910 17:29:23.550352   13777 main.go:141] libmachine: (addons-306463)       <target type='serial' port='0'/>
	I0910 17:29:23.550358   13777 main.go:141] libmachine: (addons-306463)     </console>
	I0910 17:29:23.550364   13777 main.go:141] libmachine: (addons-306463)     <rng model='virtio'>
	I0910 17:29:23.550371   13777 main.go:141] libmachine: (addons-306463)       <backend model='random'>/dev/random</backend>
	I0910 17:29:23.550377   13777 main.go:141] libmachine: (addons-306463)     </rng>
	I0910 17:29:23.550386   13777 main.go:141] libmachine: (addons-306463)     
	I0910 17:29:23.550422   13777 main.go:141] libmachine: (addons-306463)     
	I0910 17:29:23.550446   13777 main.go:141] libmachine: (addons-306463)   </devices>
	I0910 17:29:23.550457   13777 main.go:141] libmachine: (addons-306463) </domain>
	I0910 17:29:23.550464   13777 main.go:141] libmachine: (addons-306463) 
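(Editor's note: the block above is the libvirt domain XML the kvm2 driver logs before defining the VM. As a rough, non-authoritative illustration of that step, here is a minimal Go sketch that renders a similarly shaped but heavily reduced domain definition with text/template; the struct, template text, and file paths are hypothetical stand-ins, not the driver's actual code.)

```go
package main

import (
	"os"
	"text/template"
)

// domainConfig holds the handful of values the simplified template below needs.
// These fields are illustrative; the real kvm2 driver tracks many more options.
type domainConfig struct {
	Name     string
	MemoryMB int
	VCPU     int
	ISOPath  string
	DiskPath string
	Network  string
}

// domainXML is a heavily reduced stand-in for the domain definition seen in the log.
const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPU}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	cfg := domainConfig{
		Name:     "addons-306463",
		MemoryMB: 4000,
		VCPU:     2,
		ISOPath:  "/path/to/boot2docker.iso", // placeholder path
		DiskPath: "/path/to/addons-306463.rawdisk",
		Network:  "mk-addons-306463",
	}
	// Render the XML to stdout; the driver would hand a full version of this to libvirt.
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```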
	I0910 17:29:23.555556   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:8a:bf:af in network default
	I0910 17:29:23.556041   13777 main.go:141] libmachine: (addons-306463) Ensuring networks are active...
	I0910 17:29:23.556059   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:23.556675   13777 main.go:141] libmachine: (addons-306463) Ensuring network default is active
	I0910 17:29:23.556973   13777 main.go:141] libmachine: (addons-306463) Ensuring network mk-addons-306463 is active
	I0910 17:29:23.557522   13777 main.go:141] libmachine: (addons-306463) Getting domain xml...
	I0910 17:29:23.558190   13777 main.go:141] libmachine: (addons-306463) Creating domain...
	I0910 17:29:24.925718   13777 main.go:141] libmachine: (addons-306463) Waiting to get IP...
	I0910 17:29:24.926478   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:24.926843   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:24.926877   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:24.926829   13799 retry.go:31] will retry after 244.328706ms: waiting for machine to come up
	I0910 17:29:25.173225   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:25.173645   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:25.173677   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:25.173618   13799 retry.go:31] will retry after 349.863232ms: waiting for machine to come up
	I0910 17:29:25.525116   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:25.525527   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:25.525551   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:25.525492   13799 retry.go:31] will retry after 354.701071ms: waiting for machine to come up
	I0910 17:29:25.881916   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:25.882328   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:25.882350   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:25.882291   13799 retry.go:31] will retry after 411.881959ms: waiting for machine to come up
	I0910 17:29:26.296034   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:26.296469   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:26.296495   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:26.296414   13799 retry.go:31] will retry after 565.67781ms: waiting for machine to come up
	I0910 17:29:26.864221   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:26.864646   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:26.864669   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:26.864638   13799 retry.go:31] will retry after 573.622911ms: waiting for machine to come up
	I0910 17:29:27.439318   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:27.439758   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:27.439778   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:27.439737   13799 retry.go:31] will retry after 813.476344ms: waiting for machine to come up
	I0910 17:29:28.254405   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:28.254862   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:28.254883   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:28.254830   13799 retry.go:31] will retry after 1.15953408s: waiting for machine to come up
	I0910 17:29:29.416144   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:29.416582   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:29.416605   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:29.416548   13799 retry.go:31] will retry after 1.708147643s: waiting for machine to come up
	I0910 17:29:31.127436   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:31.127806   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:31.127832   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:31.127765   13799 retry.go:31] will retry after 2.290831953s: waiting for machine to come up
	I0910 17:29:33.419747   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:33.420078   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:33.420121   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:33.420025   13799 retry.go:31] will retry after 2.583428608s: waiting for machine to come up
	I0910 17:29:36.006176   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:36.006651   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:36.006676   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:36.006622   13799 retry.go:31] will retry after 2.503171234s: waiting for machine to come up
	I0910 17:29:38.511747   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:38.512087   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:38.512126   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:38.512062   13799 retry.go:31] will retry after 3.047981844s: waiting for machine to come up
	I0910 17:29:41.561167   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:41.561635   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:41.561661   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:41.561592   13799 retry.go:31] will retry after 5.416767796s: waiting for machine to come up
	I0910 17:29:46.982824   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:46.983201   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has current primary IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:46.983221   13777 main.go:141] libmachine: (addons-306463) Found IP for machine: 192.168.39.144
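(Editor's note: the repeated "will retry after …: waiting for machine to come up" lines above come from a poll-with-growing-backoff loop while DHCP assigns the VM an address. A minimal, self-contained Go sketch of that pattern follows; the lookupIP probe is a hypothetical stand-in for the driver's DHCP lease lookup, and the exact backoff schedule is an assumption.)

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the libvirt DHCP leases for the VM's MAC.
// It fails a few times to mimic the log above, then "finds" an address.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.144", nil
}

// waitForIP polls lookupIP, sleeping a randomized, growing interval between
// attempts, and gives up once the deadline passes.
func waitForIP(deadline time.Duration) (string, error) {
	start := time.Now()
	backoff := 250 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		if time.Since(start) > deadline {
			return "", fmt.Errorf("timed out waiting for IP: %w", err)
		}
		// Jitter the delay a little and grow it, like the retry.go lines in the log.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
}

func main() {
	ip, err := waitForIP(2 * time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("Found IP for machine:", ip)
}
```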
	I0910 17:29:46.983236   13777 main.go:141] libmachine: (addons-306463) Reserving static IP address...
	I0910 17:29:46.983568   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find host DHCP lease matching {name: "addons-306463", mac: "52:54:00:74:46:16", ip: "192.168.39.144"} in network mk-addons-306463
	I0910 17:29:47.052549   13777 main.go:141] libmachine: (addons-306463) DBG | Getting to WaitForSSH function...
	I0910 17:29:47.052583   13777 main.go:141] libmachine: (addons-306463) Reserved static IP address: 192.168.39.144
	I0910 17:29:47.052599   13777 main.go:141] libmachine: (addons-306463) Waiting for SSH to be available...
	I0910 17:29:47.055206   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.055721   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:minikube Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.055749   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.055768   13777 main.go:141] libmachine: (addons-306463) DBG | Using SSH client type: external
	I0910 17:29:47.055784   13777 main.go:141] libmachine: (addons-306463) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa (-rw-------)
	I0910 17:29:47.055817   13777 main.go:141] libmachine: (addons-306463) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 17:29:47.055833   13777 main.go:141] libmachine: (addons-306463) DBG | About to run SSH command:
	I0910 17:29:47.055847   13777 main.go:141] libmachine: (addons-306463) DBG | exit 0
	I0910 17:29:47.189212   13777 main.go:141] libmachine: (addons-306463) DBG | SSH cmd err, output: <nil>: 
	I0910 17:29:47.189498   13777 main.go:141] libmachine: (addons-306463) KVM machine creation complete!
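(Editor's note: the "exit 0" probe above is how machine creation decides SSH is usable: the driver shells out to the system ssh client with host-key checking disabled and treats a zero exit status as success. A hedged Go sketch of that external-client probe; the address, key path, and retry count are placeholders, and only the ssh options mirror the log.)

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `ssh ... exit 0` against the new VM and reports whether it
// succeeded. The options mirror the ones in the log; the key path and address
// are placeholders for this sketch.
func sshReady(addr, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+addr,
		"exit 0",
	)
	return cmd.Run() == nil
}

func main() {
	addr := "192.168.39.144"
	key := "/path/to/machines/addons-306463/id_rsa" // placeholder
	for i := 0; i < 10; i++ {
		if sshReady(addr, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```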
	I0910 17:29:47.189774   13777 main.go:141] libmachine: (addons-306463) Calling .GetConfigRaw
	I0910 17:29:47.190322   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:47.190546   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:47.190703   13777 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0910 17:29:47.190718   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:29:47.191953   13777 main.go:141] libmachine: Detecting operating system of created instance...
	I0910 17:29:47.191983   13777 main.go:141] libmachine: Waiting for SSH to be available...
	I0910 17:29:47.191990   13777 main.go:141] libmachine: Getting to WaitForSSH function...
	I0910 17:29:47.192000   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.194176   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.194550   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.194580   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.194727   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:47.194890   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.195040   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.195167   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:47.195310   13777 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:47.195466   13777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0910 17:29:47.195475   13777 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0910 17:29:47.296268   13777 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:29:47.296287   13777 main.go:141] libmachine: Detecting the provisioner...
	I0910 17:29:47.296294   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.298863   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.299207   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.299231   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.299390   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:47.299581   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.299710   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.299846   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:47.300038   13777 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:47.300248   13777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0910 17:29:47.300264   13777 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0910 17:29:47.401977   13777 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0910 17:29:47.402066   13777 main.go:141] libmachine: found compatible host: buildroot
	I0910 17:29:47.402080   13777 main.go:141] libmachine: Provisioning with buildroot...
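(Editor's note: "Detecting the provisioner" works by running `cat /etc/os-release` over SSH and matching the ID/NAME fields, here Buildroot. A small Go sketch of that detection, parsing the same key=value format locally rather than over SSH.)

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease splits /etc/os-release style content into a key/value map,
// trimming the optional surrounding quotes.
func parseOSRelease(content string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	// The same content the SSH command returned in the log above.
	sample := `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`
	info := parseOSRelease(sample)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["PRETTY_NAME"])
	}
}
```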
	I0910 17:29:47.402093   13777 main.go:141] libmachine: (addons-306463) Calling .GetMachineName
	I0910 17:29:47.402339   13777 buildroot.go:166] provisioning hostname "addons-306463"
	I0910 17:29:47.402369   13777 main.go:141] libmachine: (addons-306463) Calling .GetMachineName
	I0910 17:29:47.402589   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.404883   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.405227   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.405262   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.405351   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:47.405496   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.405637   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.405765   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:47.406035   13777 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:47.406187   13777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0910 17:29:47.406198   13777 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-306463 && echo "addons-306463" | sudo tee /etc/hostname
	I0910 17:29:47.519126   13777 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-306463
	
	I0910 17:29:47.519148   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.521835   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.522126   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.522165   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.522331   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:47.522503   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.522688   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.522820   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:47.522981   13777 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:47.523132   13777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0910 17:29:47.523148   13777 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-306463' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-306463/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-306463' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 17:29:47.634728   13777 main.go:141] libmachine: SSH cmd err, output: <nil>: 
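(Editor's note: after setting the hostname, the shell snippet above makes sure /etc/hosts resolves it, either rewriting an existing 127.0.1.1 line or appending one. The same check-then-edit logic as a local Go sketch, operating on an in-memory copy rather than the VM's real file.)

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostEntry mirrors the shell logic above: if no line already maps the
// hostname, rewrite an existing 127.0.1.1 entry or append a new one.
func ensureHostEntry(hosts, hostname string) string {
	hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
	if hasName.MatchString(hosts) {
		return hosts // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + hostname + "\n"
}

func main() {
	sample := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostEntry(sample, "addons-306463"))
}
```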
	I0910 17:29:47.634773   13777 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 17:29:47.634798   13777 buildroot.go:174] setting up certificates
	I0910 17:29:47.634811   13777 provision.go:84] configureAuth start
	I0910 17:29:47.634820   13777 main.go:141] libmachine: (addons-306463) Calling .GetMachineName
	I0910 17:29:47.635082   13777 main.go:141] libmachine: (addons-306463) Calling .GetIP
	I0910 17:29:47.637636   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.638056   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.638081   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.638266   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.640398   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.640703   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.640732   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.640867   13777 provision.go:143] copyHostCerts
	I0910 17:29:47.640932   13777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 17:29:47.641095   13777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 17:29:47.641166   13777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 17:29:47.641219   13777 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.addons-306463 san=[127.0.0.1 192.168.39.144 addons-306463 localhost minikube]
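(Editor's note: generating the machine's server certificate amounts to signing a fresh key with the local minikube CA and listing the VM's names and addresses as SANs, as the line above shows. A self-contained Go sketch using crypto/x509 follows; the key size, validity period, and subject fields are simplified assumptions for illustration, not minikube's exact parameters.)

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// A throwaway CA key and self-signed CA certificate.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// The server certificate, carrying the SANs seen in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "addons-306463", Organization: []string{"jenkins.addons-306463"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-306463", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.144")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert, %d bytes DER, SANs: %v %v\n", len(srvDER), srvTmpl.DNSNames, srvTmpl.IPAddresses)
}
```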
	I0910 17:29:47.725425   13777 provision.go:177] copyRemoteCerts
	I0910 17:29:47.725479   13777 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 17:29:47.725499   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.728270   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.728605   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.728635   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.728841   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:47.729028   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.729224   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:47.729412   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:29:47.812673   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 17:29:47.838502   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0910 17:29:47.861372   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 17:29:47.884280   13777 provision.go:87] duration metric: took 249.455962ms to configureAuth
	I0910 17:29:47.884302   13777 buildroot.go:189] setting minikube options for container-runtime
	I0910 17:29:47.884440   13777 config.go:182] Loaded profile config "addons-306463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:29:47.884509   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.887000   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.887356   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.887385   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.887546   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:47.887712   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.887871   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.888039   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:47.888187   13777 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:47.888352   13777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0910 17:29:47.888365   13777 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 17:29:48.228474   13777 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 17:29:48.228497   13777 main.go:141] libmachine: Checking connection to Docker...
	I0910 17:29:48.228507   13777 main.go:141] libmachine: (addons-306463) Calling .GetURL
	I0910 17:29:48.229870   13777 main.go:141] libmachine: (addons-306463) DBG | Using libvirt version 6000000
	I0910 17:29:48.232480   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.232820   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.232841   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.233000   13777 main.go:141] libmachine: Docker is up and running!
	I0910 17:29:48.233010   13777 main.go:141] libmachine: Reticulating splines...
	I0910 17:29:48.233016   13777 client.go:171] duration metric: took 25.470105424s to LocalClient.Create
	I0910 17:29:48.233036   13777 start.go:167] duration metric: took 25.470181661s to libmachine.API.Create "addons-306463"
	I0910 17:29:48.233049   13777 start.go:293] postStartSetup for "addons-306463" (driver="kvm2")
	I0910 17:29:48.233063   13777 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 17:29:48.233098   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:48.233339   13777 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 17:29:48.233365   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:48.235691   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.236027   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.236056   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.236234   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:48.236415   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:48.236578   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:48.236717   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:29:48.314956   13777 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 17:29:48.319200   13777 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 17:29:48.319217   13777 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 17:29:48.319286   13777 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 17:29:48.319313   13777 start.go:296] duration metric: took 86.256331ms for postStartSetup
	I0910 17:29:48.319357   13777 main.go:141] libmachine: (addons-306463) Calling .GetConfigRaw
	I0910 17:29:48.319875   13777 main.go:141] libmachine: (addons-306463) Calling .GetIP
	I0910 17:29:48.322245   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.322628   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.322656   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.322871   13777 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/config.json ...
	I0910 17:29:48.323037   13777 start.go:128] duration metric: took 25.577048673s to createHost
	I0910 17:29:48.323063   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:48.325320   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.325645   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.325671   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.325773   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:48.325947   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:48.326098   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:48.326209   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:48.326331   13777 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:48.326533   13777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0910 17:29:48.326545   13777 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 17:29:48.425744   13777 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725989388.402057522
	
	I0910 17:29:48.425768   13777 fix.go:216] guest clock: 1725989388.402057522
	I0910 17:29:48.425778   13777 fix.go:229] Guest: 2024-09-10 17:29:48.402057522 +0000 UTC Remote: 2024-09-10 17:29:48.323049297 +0000 UTC m=+25.672610756 (delta=79.008225ms)
	I0910 17:29:48.425835   13777 fix.go:200] guest clock delta is within tolerance: 79.008225ms
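(Editor's note: the fix.go lines compare the guest's `date +%s.%N` output against the host clock and only act when the delta exceeds a tolerance. A tiny Go sketch of that comparison; the one-second tolerance here is an assumed value for illustration.)

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (seconds.nanoseconds) into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	f, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1725989388.402057522") // value from the log above
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance for this sketch
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would adjust the clock\n", delta)
	}
}
```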
	I0910 17:29:48.425843   13777 start.go:83] releasing machines lock for "addons-306463", held for 25.679951591s
	I0910 17:29:48.425876   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:48.426150   13777 main.go:141] libmachine: (addons-306463) Calling .GetIP
	I0910 17:29:48.428633   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.428887   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.428917   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.429038   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:48.429469   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:48.429618   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:48.429702   13777 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 17:29:48.429752   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:48.429808   13777 ssh_runner.go:195] Run: cat /version.json
	I0910 17:29:48.429830   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:48.432215   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.432477   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.432509   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.432533   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.432629   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:48.432809   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:48.432852   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.432885   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.432948   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:48.433024   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:48.433123   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:29:48.433223   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:48.433357   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:48.433529   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:29:48.519560   13777 ssh_runner.go:195] Run: systemctl --version
	I0910 17:29:48.543890   13777 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 17:29:48.713886   13777 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 17:29:48.719987   13777 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 17:29:48.720039   13777 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 17:29:48.736004   13777 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 17:29:48.736022   13777 start.go:495] detecting cgroup driver to use...
	I0910 17:29:48.736067   13777 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 17:29:48.752773   13777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 17:29:48.766717   13777 docker.go:217] disabling cri-docker service (if available) ...
	I0910 17:29:48.766772   13777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 17:29:48.780643   13777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 17:29:48.794503   13777 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 17:29:48.918085   13777 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 17:29:49.086620   13777 docker.go:233] disabling docker service ...
	I0910 17:29:49.086682   13777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 17:29:49.100274   13777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 17:29:49.112877   13777 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 17:29:49.235428   13777 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 17:29:49.349493   13777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 17:29:49.363676   13777 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 17:29:49.381290   13777 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 17:29:49.381345   13777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.391264   13777 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 17:29:49.391322   13777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.401028   13777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.410592   13777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.420351   13777 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 17:29:49.430171   13777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.439789   13777 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.455759   13777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
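(Editor's note: the series of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, re-add conmon_cgroup, and ensure the unprivileged-port sysctl. The same edits expressed as a small Go sketch over an in-memory config; the sample input is a placeholder and only the substitutions mirror the log.)

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A tiny stand-in for /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image, like the first sed command in the log.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Switch the cgroup manager to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it as "pod" after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	// Ensure a default_sysctls block exists with the unprivileged-port setting.
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	fmt.Print(conf)
}
```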
	I0910 17:29:49.465551   13777 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 17:29:49.474306   13777 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 17:29:49.474354   13777 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 17:29:49.487232   13777 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 17:29:49.496150   13777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:29:49.606336   13777 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 17:29:49.695242   13777 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 17:29:49.695340   13777 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 17:29:49.699902   13777 start.go:563] Will wait 60s for crictl version
	I0910 17:29:49.699961   13777 ssh_runner.go:195] Run: which crictl
	I0910 17:29:49.703479   13777 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 17:29:49.744817   13777 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 17:29:49.744937   13777 ssh_runner.go:195] Run: crio --version
	I0910 17:29:49.773082   13777 ssh_runner.go:195] Run: crio --version
	I0910 17:29:49.804181   13777 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 17:29:49.805563   13777 main.go:141] libmachine: (addons-306463) Calling .GetIP
	I0910 17:29:49.808022   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:49.808405   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:49.808439   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:49.808624   13777 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 17:29:49.812736   13777 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:29:49.825102   13777 kubeadm.go:883] updating cluster {Name:addons-306463 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:addons-306463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 17:29:49.825212   13777 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:29:49.825256   13777 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 17:29:49.856852   13777 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 17:29:49.856923   13777 ssh_runner.go:195] Run: which lz4
	I0910 17:29:49.860976   13777 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 17:29:49.865045   13777 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 17:29:49.865078   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 17:29:51.093518   13777 crio.go:462] duration metric: took 1.232563952s to copy over tarball
	I0910 17:29:51.093585   13777 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 17:29:53.221638   13777 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.128025242s)
	I0910 17:29:53.221664   13777 crio.go:469] duration metric: took 2.128123943s to extract the tarball
	I0910 17:29:53.221671   13777 ssh_runner.go:146] rm: /preloaded.tar.lz4
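(Editor's note: since no preloaded images were found on the node, the preload tarball is copied up and unpacked with tar using the lz4 decompressor, preserving xattrs so image layers keep their capabilities. A Go sketch that shells out with the same flags; the tarball path is a placeholder and tar/lz4 are assumed to exist on the host.)

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload unpacks an lz4-compressed image tarball under destDir,
// keeping security xattrs, mirroring the tar invocation in the log above.
func extractPreload(tarball, destDir string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4",
		"-C", destDir,
		"-xf", tarball,
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	start := time.Now()
	// Placeholder path; the real file is copied to /preloaded.tar.lz4 first.
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("took %v to extract the tarball\n", time.Since(start))
}
```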
	I0910 17:29:53.258544   13777 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 17:29:53.300100   13777 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 17:29:53.300128   13777 cache_images.go:84] Images are preloaded, skipping loading
	I0910 17:29:53.300138   13777 kubeadm.go:934] updating node { 192.168.39.144 8443 v1.31.0 crio true true} ...
	I0910 17:29:53.300253   13777 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-306463 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-306463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 17:29:53.300317   13777 ssh_runner.go:195] Run: crio config
	I0910 17:29:53.353856   13777 cni.go:84] Creating CNI manager for ""
	I0910 17:29:53.353875   13777 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 17:29:53.353885   13777 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 17:29:53.353905   13777 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.144 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-306463 NodeName:addons-306463 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 17:29:53.354032   13777 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-306463"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 17:29:53.354084   13777 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 17:29:53.364093   13777 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 17:29:53.364159   13777 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 17:29:53.373663   13777 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0910 17:29:53.391325   13777 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 17:29:53.408601   13777 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
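(Editor's note: the kubeadm/kubelet/kube-proxy YAML printed earlier is assembled from the options struct in the kubeadm.go lines and written out as /var/tmp/minikube/kubeadm.yaml.new. As a rough sketch of that templating step, here is a reduced Go program that renders only a few of those fields; the template text and struct are hypothetical simplifications, not minikube's actual generator.)

```go
package main

import (
	"os"
	"text/template"
)

// kubeadmParams carries the handful of values this reduced template needs;
// the real generator fills in far more (see the kubeadm options in the log above).
type kubeadmParams struct {
	NodeName         string
	AdvertiseAddress string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const initConfig = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		NodeName:         "addons-306463",
		AdvertiseAddress: "192.168.39.144",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.31.0",
	}
	// Render the reduced config to stdout; the real file would be scp'd to the node.
	tmpl := template.Must(template.New("kubeadm").Parse(initConfig))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```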
	I0910 17:29:53.428267   13777 ssh_runner.go:195] Run: grep 192.168.39.144	control-plane.minikube.internal$ /etc/hosts
	I0910 17:29:53.432004   13777 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:29:53.443494   13777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:29:53.565386   13777 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:29:53.582101   13777 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463 for IP: 192.168.39.144
	I0910 17:29:53.582140   13777 certs.go:194] generating shared ca certs ...
	I0910 17:29:53.582161   13777 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:53.582320   13777 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 17:29:53.851863   13777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt ...
	I0910 17:29:53.851887   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt: {Name:mk391b947a0b07d47c3f48605c2169ac6bbd02dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:53.852030   13777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key ...
	I0910 17:29:53.852040   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key: {Name:mke85b1ed3e4a8e9bbc933ab9200470c82fbf9f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:53.852110   13777 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 17:29:54.025549   13777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt ...
	I0910 17:29:54.025576   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt: {Name:mkba6d1cf3fb11e6bd8f0b60294ec684bf33d7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.025720   13777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key ...
	I0910 17:29:54.025730   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key: {Name:mke1e40be102cd0ea85ebf8e9804fe7294de9b3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.025806   13777 certs.go:256] generating profile certs ...
	I0910 17:29:54.025854   13777 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.key
	I0910 17:29:54.025873   13777 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt with IP's: []
	I0910 17:29:54.256975   13777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt ...
	I0910 17:29:54.257001   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: {Name:mkddd504fb642c11276cd07fd6115fe4786a05eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.257158   13777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.key ...
	I0910 17:29:54.257169   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.key: {Name:mkd6342dd54701d46a2aa87d79fc772b251c8012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.257264   13777 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.key.8f6ba92e
	I0910 17:29:54.257283   13777 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.crt.8f6ba92e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.144]
	I0910 17:29:54.390720   13777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.crt.8f6ba92e ...
	I0910 17:29:54.390752   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.crt.8f6ba92e: {Name:mkef82fca0b89b824a8a6247fbc2d43a96f4692c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.390921   13777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.key.8f6ba92e ...
	I0910 17:29:54.390940   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.key.8f6ba92e: {Name:mk548882b9e102cf63bf5a2676b5044c14781eaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.391030   13777 certs.go:381] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.crt.8f6ba92e -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.crt
	I0910 17:29:54.391118   13777 certs.go:385] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.key.8f6ba92e -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.key
	I0910 17:29:54.391182   13777 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.key
	I0910 17:29:54.391204   13777 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.crt with IP's: []
	I0910 17:29:54.752265   13777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.crt ...
	I0910 17:29:54.752292   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.crt: {Name:mkc361744979bc8404f5a5aaa8788af34523a213 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.752452   13777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.key ...
	I0910 17:29:54.752468   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.key: {Name:mkcded4c85166d07f3f2b1b8ff068b03a9d76311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.752681   13777 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 17:29:54.752717   13777 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 17:29:54.752753   13777 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 17:29:54.752785   13777 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
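The minikubeCA and profile certificates logged above are generated in-process by minikube's certs/crypto helpers rather than by shelling out to openssl. A minimal Go sketch of an equivalent self-signed CA; the key size, subject name, and validity window here are assumptions for illustration, not necessarily minikube's exact parameters:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Assumed key size and validity; minikube's real values may differ.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	// Self-signed: the template is its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("ca.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	_ = os.WriteFile("ca.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
}
```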
	I0910 17:29:54.753440   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 17:29:54.779118   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 17:29:54.803026   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 17:29:54.825435   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 17:29:54.848031   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0910 17:29:54.872008   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 17:29:54.897479   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 17:29:54.922879   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 17:29:54.947831   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 17:29:54.974722   13777 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 17:29:54.994110   13777 ssh_runner.go:195] Run: openssl version
	I0910 17:29:55.000395   13777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 17:29:55.013767   13777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:29:55.018473   13777 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:29:55.018531   13777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:29:55.024792   13777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 17:29:55.035682   13777 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 17:29:55.039752   13777 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 17:29:55.039807   13777 kubeadm.go:392] StartCluster: {Name:addons-306463 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-306463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:29:55.039892   13777 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 17:29:55.039955   13777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 17:29:55.094283   13777 cri.go:89] found id: ""
	I0910 17:29:55.094342   13777 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 17:29:55.112402   13777 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 17:29:55.123314   13777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 17:29:55.135689   13777 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 17:29:55.135707   13777 kubeadm.go:157] found existing configuration files:
	
	I0910 17:29:55.135753   13777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 17:29:55.144757   13777 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 17:29:55.144811   13777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 17:29:55.154051   13777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 17:29:55.162743   13777 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 17:29:55.162794   13777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 17:29:55.171799   13777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 17:29:55.180529   13777 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 17:29:55.180583   13777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 17:29:55.191873   13777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 17:29:55.200886   13777 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 17:29:55.200937   13777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 17:29:55.210181   13777 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 17:29:55.258814   13777 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 17:29:55.258968   13777 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 17:29:55.371415   13777 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 17:29:55.371545   13777 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 17:29:55.371669   13777 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 17:29:55.384083   13777 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 17:29:55.408465   13777 out.go:235]   - Generating certificates and keys ...
	I0910 17:29:55.408589   13777 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 17:29:55.408665   13777 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 17:29:55.897673   13777 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 17:29:56.059223   13777 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0910 17:29:56.278032   13777 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0910 17:29:56.441145   13777 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0910 17:29:56.605793   13777 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0910 17:29:56.605947   13777 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-306463 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	I0910 17:29:56.790976   13777 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0910 17:29:56.791214   13777 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-306463 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	I0910 17:29:56.836139   13777 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 17:29:57.046320   13777 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 17:29:57.222692   13777 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0910 17:29:57.222801   13777 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 17:29:57.462021   13777 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 17:29:57.829972   13777 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 17:29:57.954467   13777 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 17:29:58.166081   13777 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 17:29:58.224456   13777 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 17:29:58.224997   13777 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 17:29:58.227323   13777 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 17:29:58.229164   13777 out.go:235]   - Booting up control plane ...
	I0910 17:29:58.229261   13777 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 17:29:58.229329   13777 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 17:29:58.229426   13777 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 17:29:58.245412   13777 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 17:29:58.251271   13777 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 17:29:58.251364   13777 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 17:29:58.388887   13777 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 17:29:58.389039   13777 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 17:29:58.890585   13777 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.078984ms
	I0910 17:29:58.890687   13777 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 17:30:03.392681   13777 kubeadm.go:310] [api-check] The API server is healthy after 4.502932782s
	I0910 17:30:03.406115   13777 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 17:30:03.420124   13777 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 17:30:03.449395   13777 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 17:30:03.449667   13777 kubeadm.go:310] [mark-control-plane] Marking the node addons-306463 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 17:30:03.460309   13777 kubeadm.go:310] [bootstrap-token] Using token: 457t84.d2zxow5i3fyaif8g
	I0910 17:30:03.461609   13777 out.go:235]   - Configuring RBAC rules ...
	I0910 17:30:03.461716   13777 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 17:30:03.465462   13777 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 17:30:03.474356   13777 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 17:30:03.477241   13777 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 17:30:03.483988   13777 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 17:30:03.489715   13777 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 17:30:03.799075   13777 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 17:30:04.227910   13777 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 17:30:04.798072   13777 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 17:30:04.798097   13777 kubeadm.go:310] 
	I0910 17:30:04.798189   13777 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 17:30:04.798211   13777 kubeadm.go:310] 
	I0910 17:30:04.798306   13777 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 17:30:04.798317   13777 kubeadm.go:310] 
	I0910 17:30:04.798366   13777 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 17:30:04.798449   13777 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 17:30:04.798534   13777 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 17:30:04.798547   13777 kubeadm.go:310] 
	I0910 17:30:04.798615   13777 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 17:30:04.798626   13777 kubeadm.go:310] 
	I0910 17:30:04.798664   13777 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 17:30:04.798671   13777 kubeadm.go:310] 
	I0910 17:30:04.798731   13777 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 17:30:04.798795   13777 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 17:30:04.798868   13777 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 17:30:04.798878   13777 kubeadm.go:310] 
	I0910 17:30:04.798966   13777 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 17:30:04.799060   13777 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 17:30:04.799070   13777 kubeadm.go:310] 
	I0910 17:30:04.799182   13777 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 457t84.d2zxow5i3fyaif8g \
	I0910 17:30:04.799300   13777 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 \
	I0910 17:30:04.799341   13777 kubeadm.go:310] 	--control-plane 
	I0910 17:30:04.799355   13777 kubeadm.go:310] 
	I0910 17:30:04.799468   13777 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 17:30:04.799478   13777 kubeadm.go:310] 
	I0910 17:30:04.799599   13777 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 457t84.d2zxow5i3fyaif8g \
	I0910 17:30:04.799726   13777 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 
	I0910 17:30:04.800658   13777 kubeadm.go:310] W0910 17:29:55.239705     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 17:30:04.800920   13777 kubeadm.go:310] W0910 17:29:55.240584     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 17:30:04.801008   13777 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
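The kubeadm join commands printed above carry a --discovery-token-ca-cert-hash. Assuming the standard kubeadm convention (SHA-256 over the CA certificate's DER-encoded Subject Public Key Info) and the /var/lib/minikube/certs/ca.crt path copied to the node earlier in this log, a sketch of recomputing that value:

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path as used on the minikube node earlier in this log; adjust for a local copy.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
```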
	I0910 17:30:04.801028   13777 cni.go:84] Creating CNI manager for ""
	I0910 17:30:04.801040   13777 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 17:30:04.802881   13777 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 17:30:04.804227   13777 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 17:30:04.816674   13777 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 17:30:04.835609   13777 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 17:30:04.835737   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:04.835739   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-306463 minikube.k8s.io/updated_at=2024_09_10T17_30_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=addons-306463 minikube.k8s.io/primary=true
	I0910 17:30:04.865385   13777 ops.go:34] apiserver oom_adj: -16
	I0910 17:30:04.960966   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:05.461285   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:05.961804   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:06.461686   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:06.961554   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:07.461362   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:07.961164   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:08.461339   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:08.961327   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:09.461036   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:09.564661   13777 kubeadm.go:1113] duration metric: took 4.728972481s to wait for elevateKubeSystemPrivileges
	I0910 17:30:09.564692   13777 kubeadm.go:394] duration metric: took 14.524892016s to StartCluster
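The repeated `kubectl get sa default` runs above are a simple poll until the default service account exists (the elevateKubeSystemPrivileges wait). A generic sketch of that wait pattern in Go; the 500ms interval matches the spacing visible in the timestamps, while the command invocation, timeout, and helper name are assumptions for illustration:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until `kubectl get sa default` succeeds or the context expires.
func waitForDefaultSA(ctx context.Context, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := exec.CommandContext(ctx, "kubectl", "get", "sa", "default").Run(); err == nil {
			return nil // service account is visible, cluster RBAC bootstrap is done
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for default service account: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForDefaultSA(ctx, 500*time.Millisecond); err != nil {
		panic(err)
	}
	fmt.Println("default service account is ready")
}
```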
	I0910 17:30:09.564710   13777 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:09.564844   13777 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:30:09.565243   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:09.565462   13777 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:30:09.565495   13777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0910 17:30:09.565538   13777 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0910 17:30:09.565627   13777 addons.go:69] Setting cloud-spanner=true in profile "addons-306463"
	I0910 17:30:09.565651   13777 addons.go:69] Setting yakd=true in profile "addons-306463"
	I0910 17:30:09.565662   13777 addons.go:234] Setting addon cloud-spanner=true in "addons-306463"
	I0910 17:30:09.565655   13777 addons.go:69] Setting inspektor-gadget=true in profile "addons-306463"
	I0910 17:30:09.565675   13777 addons.go:234] Setting addon yakd=true in "addons-306463"
	I0910 17:30:09.565670   13777 addons.go:69] Setting gcp-auth=true in profile "addons-306463"
	I0910 17:30:09.565685   13777 addons.go:234] Setting addon inspektor-gadget=true in "addons-306463"
	I0910 17:30:09.565692   13777 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-306463"
	I0910 17:30:09.565703   13777 mustload.go:65] Loading cluster: addons-306463
	I0910 17:30:09.565700   13777 config.go:182] Loaded profile config "addons-306463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:30:09.565711   13777 addons.go:69] Setting metrics-server=true in profile "addons-306463"
	I0910 17:30:09.565715   13777 addons.go:69] Setting helm-tiller=true in profile "addons-306463"
	I0910 17:30:09.565720   13777 addons.go:69] Setting storage-provisioner=true in profile "addons-306463"
	I0910 17:30:09.565734   13777 addons.go:234] Setting addon metrics-server=true in "addons-306463"
	I0910 17:30:09.565738   13777 addons.go:234] Setting addon storage-provisioner=true in "addons-306463"
	I0910 17:30:09.565740   13777 addons.go:69] Setting ingress=true in profile "addons-306463"
	I0910 17:30:09.565740   13777 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-306463"
	I0910 17:30:09.565753   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565754   13777 addons.go:69] Setting volcano=true in profile "addons-306463"
	I0910 17:30:09.565760   13777 addons.go:69] Setting ingress-dns=true in profile "addons-306463"
	I0910 17:30:09.565765   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565765   13777 addons.go:69] Setting registry=true in profile "addons-306463"
	I0910 17:30:09.565776   13777 addons.go:234] Setting addon volcano=true in "addons-306463"
	I0910 17:30:09.565760   13777 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-306463"
	I0910 17:30:09.565783   13777 addons.go:234] Setting addon registry=true in "addons-306463"
	I0910 17:30:09.565793   13777 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-306463"
	I0910 17:30:09.565801   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565809   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565810   13777 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-306463"
	I0910 17:30:09.565834   13777 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-306463"
	I0910 17:30:09.565735   13777 addons.go:234] Setting addon helm-tiller=true in "addons-306463"
	I0910 17:30:09.565889   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565897   13777 config.go:182] Loaded profile config "addons-306463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:30:09.565777   13777 addons.go:234] Setting addon ingress-dns=true in "addons-306463"
	I0910 17:30:09.566180   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566186   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566191   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.566190   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566210   13777 addons.go:69] Setting default-storageclass=true in profile "addons-306463"
	I0910 17:30:09.566212   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566220   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566224   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566226   13777 addons.go:69] Setting volumesnapshots=true in profile "addons-306463"
	I0910 17:30:09.565707   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565756   13777 addons.go:234] Setting addon ingress=true in "addons-306463"
	I0910 17:30:09.566246   13777 addons.go:234] Setting addon volumesnapshots=true in "addons-306463"
	I0910 17:30:09.565705   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565801   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.566276   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.566214   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566431   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.565756   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.566494   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566227   13777 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-306463"
	I0910 17:30:09.565709   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.566515   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566518   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566594   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566617   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566232   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566712   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566737   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566765   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.566781   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566800   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566821   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566831   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566843   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566802   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566880   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566882   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566891   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566902   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566910   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566935   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.567017   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.567048   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.567756   13777 out.go:177] * Verifying Kubernetes components...
	I0910 17:30:09.569434   13777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:30:09.582777   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0910 17:30:09.589426   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.589457   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.589941   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.591066   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.591086   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.593346   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.593990   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.594031   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.614952   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36269
	I0910 17:30:09.615511   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.616077   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.616100   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.625500   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.626139   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.626180   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.626663   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40557
	I0910 17:30:09.627167   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.627742   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.627760   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.628137   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.628731   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.628754   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.628942   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42255
	I0910 17:30:09.629508   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.629998   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.630014   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.630491   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.631027   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.631063   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.631232   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33169
	I0910 17:30:09.631984   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.632597   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.632614   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.633144   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I0910 17:30:09.633568   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.634036   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.634051   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.634409   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.634947   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.634984   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.635276   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.635474   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.639823   13777 addons.go:234] Setting addon default-storageclass=true in "addons-306463"
	I0910 17:30:09.639870   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.640208   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.640228   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.649585   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42783
	I0910 17:30:09.650122   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.650724   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.650742   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.651106   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.651353   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43517
	I0910 17:30:09.651675   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.651705   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.651834   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.652091   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41021
	I0910 17:30:09.652330   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.652346   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.652505   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.653024   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.653041   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.653481   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39633
	I0910 17:30:09.653910   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.654114   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.654913   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32845
	I0910 17:30:09.655435   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.655964   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.655981   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.656044   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.656117   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.656812   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.656832   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.657418   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.657493   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38221
	I0910 17:30:09.657907   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.658557   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.658600   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.658821   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
	I0910 17:30:09.659275   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.659751   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.659768   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.660535   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.660593   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40217
	I0910 17:30:09.661560   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.661593   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.661831   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.661907   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.662410   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.662439   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.662442   13777 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0910 17:30:09.662415   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.662611   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.662676   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32849
	I0910 17:30:09.662687   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.663387   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.663450   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.663526   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.663886   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.664005   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.664015   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.664124   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.664133   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.664307   13777 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0910 17:30:09.664322   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0910 17:30:09.664338   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.664427   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.664960   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.665000   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.665625   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.665808   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.666537   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.666894   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.666927   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.667412   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38989
	I0910 17:30:09.667675   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.668696   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.669275   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.669291   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.669343   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.669546   13777 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0910 17:30:09.670692   13777 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 17:30:09.670708   13777 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 17:30:09.670727   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.670952   13777 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-306463"
	I0910 17:30:09.670991   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.671783   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.671816   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.672717   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.673017   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.673445   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.673492   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.673650   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.673854   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.674003   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.676862   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.676873   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41829
	I0910 17:30:09.676918   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45793
	I0910 17:30:09.676994   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.677003   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.677025   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.677041   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.677261   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.677376   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.677625   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.677718   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.678066   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44557
	I0910 17:30:09.678469   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.678717   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.678737   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.678906   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.678926   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.679232   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.679271   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.679735   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.679770   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.679844   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.679855   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.680043   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.680698   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.681570   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.681611   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.681815   13777 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0910 17:30:09.681916   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.682688   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.682726   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.683190   13777 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 17:30:09.683203   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0910 17:30:09.683218   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.686842   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.687460   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.687482   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.687670   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.687848   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.688024   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.688177   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.694726   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39185
	I0910 17:30:09.695273   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.695643   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36063
	I0910 17:30:09.696099   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.696281   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.696293   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.696679   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.696746   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33919
	I0910 17:30:09.696887   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.698037   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.698762   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.698922   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.698941   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.699119   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.699136   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.699179   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46211
	I0910 17:30:09.699522   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.699585   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.699840   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.700601   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.700644   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.700874   13777 out.go:177]   - Using image docker.io/registry:2.8.3
	I0910 17:30:09.700998   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.701016   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40939
	I0910 17:30:09.701360   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.701612   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41153
	I0910 17:30:09.701832   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.701844   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.702101   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.702118   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.702224   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.702441   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.703052   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.703125   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.703591   13777 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0910 17:30:09.704094   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.704109   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.704260   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0910 17:30:09.704704   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.704740   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.704775   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.705063   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0910 17:30:09.705196   13777 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0910 17:30:09.705211   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0910 17:30:09.705219   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I0910 17:30:09.705226   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.705196   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.705342   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.706377   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.706400   13777 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0910 17:30:09.706411   13777 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0910 17:30:09.706426   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.706440   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.706471   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.706482   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.707075   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.707216   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.707235   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.707300   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.707366   13777 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0910 17:30:09.707624   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.707822   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.708675   13777 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0910 17:30:09.708690   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0910 17:30:09.708705   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.712661   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.713131   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.713163   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.713366   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.713421   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.713480   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.713861   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:09.713873   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:09.713918   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.713956   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.713983   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.714002   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.714031   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.714206   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.714247   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.714468   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:09.714499   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45441
	I0910 17:30:09.714592   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:09.714604   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:09.714613   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:09.714627   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:09.714682   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.714871   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.714961   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.714997   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.715045   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.715064   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.715156   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.715206   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.715419   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.715432   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.715492   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:09.715508   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:09.715557   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	W0910 17:30:09.715586   13777 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0910 17:30:09.715674   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.715712   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.715796   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.716017   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.716559   13777 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:30:09.716638   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0910 17:30:09.717659   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.717965   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.718259   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.719379   13777 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0910 17:30:09.719428   13777 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 17:30:09.719443   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0910 17:30:09.719454   13777 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0910 17:30:09.720905   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34715
	I0910 17:30:09.721013   13777 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0910 17:30:09.721027   13777 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0910 17:30:09.721044   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.721066   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33463
	I0910 17:30:09.721206   13777 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:30:09.721216   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 17:30:09.721229   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.721849   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.722165   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.722359   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.722466   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.722470   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44619
	I0910 17:30:09.722708   13777 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:30:09.722753   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0910 17:30:09.723597   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.723648   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.723855   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.724282   13777 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0910 17:30:09.724307   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0910 17:30:09.724324   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.724525   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.725165   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0910 17:30:09.725201   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.725218   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.725561   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.726077   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.726104   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.726140   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.726215   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.726601   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.726630   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.726642   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.726678   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.726725   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.726825   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.727007   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.727185   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.727319   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.727446   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.727475   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.727554   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0910 17:30:09.727608   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.727780   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.728076   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.728343   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.728947   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.729258   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.729880   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.729952   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.730000   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.730827   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0910 17:30:09.731231   13777 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0910 17:30:09.731583   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40943
	I0910 17:30:09.731692   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.732073   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.732112   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.732762   13777 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0910 17:30:09.732777   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0910 17:30:09.732794   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.733213   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.733241   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.733392   13777 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0910 17:30:09.733608   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.733837   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0910 17:30:09.733864   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.734595   13777 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0910 17:30:09.733877   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.734613   13777 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0910 17:30:09.734632   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.734774   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.736617   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0910 17:30:09.737387   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.737645   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.737692   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0910 17:30:09.737715   13777 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0910 17:30:09.737739   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.737924   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.737974   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.738098   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.738264   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.738435   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.738443   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.738478   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.738597   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.738607   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.738839   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.738982   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.739120   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.740323   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41219
	I0910 17:30:09.740652   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.740693   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.741101   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.741129   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.741227   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.741442   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.741462   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.741464   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.741593   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.741743   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.741743   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.741915   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.743141   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.743345   13777 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 17:30:09.743359   13777 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 17:30:09.743372   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.746708   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.746740   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.746763   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.746782   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.746853   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.746981   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.747118   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	W0910 17:30:09.748150   13777 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56800->192.168.39.144:22: read: connection reset by peer
	I0910 17:30:09.748170   13777 retry.go:31] will retry after 285.141352ms: ssh: handshake failed: read tcp 192.168.39.1:56800->192.168.39.144:22: read: connection reset by peer
	I0910 17:30:09.753685   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38941
	I0910 17:30:09.753988   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.754407   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.754424   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.754715   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.754955   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.756271   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.758237   13777 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0910 17:30:09.759829   13777 out.go:177]   - Using image docker.io/busybox:stable
	I0910 17:30:09.761821   13777 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 17:30:09.761840   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0910 17:30:09.761857   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.764453   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.764819   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.764843   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.764947   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.765134   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.765249   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.765359   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	W0910 17:30:09.765990   13777 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56802->192.168.39.144:22: read: connection reset by peer
	I0910 17:30:09.766007   13777 retry.go:31] will retry after 202.018394ms: ssh: handshake failed: read tcp 192.168.39.1:56802->192.168.39.144:22: read: connection reset by peer
	W0910 17:30:09.969022   13777 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56808->192.168.39.144:22: read: connection reset by peer
	I0910 17:30:09.969051   13777 retry.go:31] will retry after 235.947645ms: ssh: handshake failed: read tcp 192.168.39.1:56808->192.168.39.144:22: read: connection reset by peer
	I0910 17:30:10.094763   13777 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:30:10.094906   13777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0910 17:30:10.122256   13777 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0910 17:30:10.122278   13777 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0910 17:30:10.186667   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0910 17:30:10.191366   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 17:30:10.193981   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0910 17:30:10.193996   13777 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0910 17:30:10.259618   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0910 17:30:10.270667   13777 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0910 17:30:10.270685   13777 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0910 17:30:10.276555   13777 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0910 17:30:10.276571   13777 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0910 17:30:10.310365   13777 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 17:30:10.310384   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0910 17:30:10.315555   13777 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0910 17:30:10.315573   13777 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0910 17:30:10.352407   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:30:10.369092   13777 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0910 17:30:10.369117   13777 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0910 17:30:10.381559   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0910 17:30:10.401157   13777 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0910 17:30:10.401178   13777 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0910 17:30:10.403491   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0910 17:30:10.403515   13777 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0910 17:30:10.472910   13777 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0910 17:30:10.472930   13777 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0910 17:30:10.489850   13777 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0910 17:30:10.489869   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0910 17:30:10.511021   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 17:30:10.534214   13777 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0910 17:30:10.534238   13777 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0910 17:30:10.554150   13777 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0910 17:30:10.554167   13777 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0910 17:30:10.557521   13777 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 17:30:10.557543   13777 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 17:30:10.572746   13777 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0910 17:30:10.572764   13777 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0910 17:30:10.573994   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0910 17:30:10.574011   13777 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0910 17:30:10.704085   13777 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0910 17:30:10.704110   13777 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0910 17:30:10.727766   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 17:30:10.747348   13777 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 17:30:10.747374   13777 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 17:30:10.763336   13777 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0910 17:30:10.763355   13777 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0910 17:30:10.766511   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0910 17:30:10.774570   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0910 17:30:10.774593   13777 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0910 17:30:10.782428   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0910 17:30:10.782444   13777 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0910 17:30:10.809598   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0910 17:30:11.063857   13777 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0910 17:30:11.063892   13777 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0910 17:30:11.074085   13777 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:11.074112   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0910 17:30:11.088999   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0910 17:30:11.089024   13777 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0910 17:30:11.100617   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 17:30:11.112993   13777 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0910 17:30:11.113018   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0910 17:30:11.298472   13777 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0910 17:30:11.298502   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0910 17:30:11.316663   13777 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0910 17:30:11.316693   13777 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0910 17:30:11.369539   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0910 17:30:11.383347   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:11.653526   13777 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0910 17:30:11.653554   13777 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0910 17:30:11.678871   13777 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0910 17:30:11.678895   13777 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0910 17:30:11.862075   13777 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 17:30:11.862095   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0910 17:30:11.921871   13777 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0910 17:30:11.921897   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0910 17:30:12.123524   13777 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.028712837s)
	I0910 17:30:12.123546   13777 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.02861212s)
	I0910 17:30:12.123568   13777 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0910 17:30:12.138011   13777 node_ready.go:35] waiting up to 6m0s for node "addons-306463" to be "Ready" ...
	I0910 17:30:12.143070   13777 node_ready.go:49] node "addons-306463" has status "Ready":"True"
	I0910 17:30:12.143098   13777 node_ready.go:38] duration metric: took 5.040837ms for node "addons-306463" to be "Ready" ...
	I0910 17:30:12.143109   13777 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:30:12.155112   13777 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-c5qxp" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:12.301578   13777 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0910 17:30:12.301604   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0910 17:30:12.345205   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 17:30:12.640873   13777 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-306463" context rescaled to 1 replicas
	I0910 17:30:12.648121   13777 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 17:30:12.648142   13777 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0910 17:30:13.153205   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 17:30:13.916729   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.73001943s)
	I0910 17:30:13.916745   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.725354593s)
	I0910 17:30:13.916787   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:13.916800   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:13.916812   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:13.916818   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.657160792s)
	I0910 17:30:13.916832   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:13.916840   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:13.916849   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:13.917138   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:13.917155   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:13.917164   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:13.917162   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:13.917172   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:13.917292   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:13.917292   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:13.917312   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:13.917321   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:13.917329   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:13.917336   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:13.917347   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:13.917419   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:13.917426   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:13.917458   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:13.917492   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:13.917516   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:13.919078   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:13.919092   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:13.919112   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:13.919122   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:13.919092   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:14.275505   13777 pod_ready.go:103] pod "coredns-6f6b679f8f-c5qxp" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:14.583313   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.230869529s)
	I0910 17:30:14.583362   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:14.583374   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:14.583656   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:14.583673   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:14.583683   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:14.583691   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:14.583884   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:14.583898   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:16.178328   13777 pod_ready.go:93] pod "coredns-6f6b679f8f-c5qxp" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:16.178361   13777 pod_ready.go:82] duration metric: took 4.02322283s for pod "coredns-6f6b679f8f-c5qxp" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:16.178376   13777 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fk47l" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:16.744986   13777 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0910 17:30:16.745032   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:16.748322   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:16.748729   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:16.748755   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:16.748928   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:16.749117   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:16.749277   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:16.749413   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:16.985599   13777 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0910 17:30:17.019642   13777 addons.go:234] Setting addon gcp-auth=true in "addons-306463"
	I0910 17:30:17.019684   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:17.020002   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:17.020027   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:17.035756   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41545
	I0910 17:30:17.036129   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:17.036614   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:17.036638   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:17.036957   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:17.037567   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:17.037606   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:17.052624   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33467
	I0910 17:30:17.053092   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:17.053555   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:17.053575   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:17.053874   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:17.054058   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:17.055568   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:17.055797   13777 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0910 17:30:17.055824   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:17.058347   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:17.058720   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:17.058755   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:17.058878   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:17.059056   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:17.059232   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:17.059408   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:18.294928   13777 pod_ready.go:103] pod "coredns-6f6b679f8f-fk47l" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:18.793144   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.411553079s)
	I0910 17:30:18.793145   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.282095983s)
	I0910 17:30:18.793236   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.065445297s)
	I0910 17:30:18.793270   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793187   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793285   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793310   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793340   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.026800859s)
	I0910 17:30:18.793371   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793387   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793269   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793447   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793468   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.692800645s)
	I0910 17:30:18.793374   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.983746942s)
	I0910 17:30:18.793499   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793508   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793513   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793517   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793601   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.424038762s)
	I0910 17:30:18.793624   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793633   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793677   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.793701   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.793737   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793764   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793796   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.410424596s)
	W0910 17:30:18.793833   13777 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0910 17:30:18.793860   13777 retry.go:31] will retry after 281.684636ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
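	(Editor's note, not part of the captured log: the failure above is the usual CRD-ordering problem — the VolumeSnapshotClass object is applied in the same pass as the CRD that defines it, before the API server has registered the new kind, so kubectl reports "no matches for kind ... ensure CRDs are installed first". The log then schedules a retry after ~280ms, and the later re-apply succeeds once the CRDs are established. Below is a minimal, illustrative Go sketch of that retry-with-backoff pattern; the helper name and parameters are hypothetical and this is not minikube's actual retry.go implementation.)

	package main

	import (
		"fmt"
		"time"
	)

	// retryWithBackoff re-runs fn with a growing delay until it succeeds or
	// the attempt budget is exhausted. Hypothetical helper for illustration.
	func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil // succeeded, e.g. once the CRDs are established
			}
			time.Sleep(delay)
			delay *= 2 // simple exponential backoff between attempts
		}
		return fmt.Errorf("all %d attempts failed: %w", attempts, err)
	}

	func main() {
		calls := 0
		_ = retryWithBackoff(5, 250*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				// simulate the transient error seen in the log above
				return fmt.Errorf("no matches for kind VolumeSnapshotClass")
			}
			return nil
		})
		fmt.Println("succeeded after", calls, "calls")
	}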
	I0910 17:30:18.793941   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.448707771s)
	I0910 17:30:18.793961   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793971   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.794043   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.794051   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.794058   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.794066   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795483   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.795531   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.795547   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.795569   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795575   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795583   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.795590   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795649   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795657   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795658   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.795665   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.795672   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795682   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795689   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795696   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.795703   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795713   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.795732   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795744   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795751   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.795757   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795762   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795771   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795781   13777 addons.go:475] Verifying addon ingress=true in "addons-306463"
	I0910 17:30:18.795793   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.795812   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795818   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795824   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.795830   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795884   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795900   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795908   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.795914   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795971   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.796000   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.796018   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.796031   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.796038   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.796047   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.796055   13777 addons.go:475] Verifying addon metrics-server=true in "addons-306463"
	I0910 17:30:18.796152   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.796021   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.796451   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.796481   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.796495   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.796938   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.796966   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.796973   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.796992   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.797004   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.797213   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.797217   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.797239   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.797246   13777 addons.go:475] Verifying addon registry=true in "addons-306463"
	I0910 17:30:18.795865   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.798742   13777 out.go:177] * Verifying ingress addon...
	I0910 17:30:18.799682   13777 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-306463 service yakd-dashboard -n yakd-dashboard
	
	I0910 17:30:18.799716   13777 out.go:177] * Verifying registry addon...
	I0910 17:30:18.801342   13777 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0910 17:30:18.802106   13777 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0910 17:30:18.809767   13777 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0910 17:30:18.809787   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:18.811444   13777 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0910 17:30:18.811469   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:18.826959   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.826981   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.827246   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.827267   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	W0910 17:30:18.827341   13777 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0910 17:30:18.834146   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.834161   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.834395   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.834415   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.834429   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:19.076009   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:19.326915   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:19.327040   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:19.615946   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.462685919s)
	I0910 17:30:19.616011   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:19.616033   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:19.615967   13777 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.560143893s)
	I0910 17:30:19.616447   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:19.616479   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:19.616503   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:19.616512   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:19.616521   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:19.616744   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:19.616759   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:19.616776   13777 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-306463"
	I0910 17:30:19.617622   13777 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:30:19.618428   13777 out.go:177] * Verifying csi-hostpath-driver addon...
	I0910 17:30:19.620045   13777 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0910 17:30:19.621038   13777 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0910 17:30:19.621222   13777 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0910 17:30:19.621237   13777 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0910 17:30:19.662236   13777 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0910 17:30:19.662270   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:19.722439   13777 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0910 17:30:19.722462   13777 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0910 17:30:19.763288   13777 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0910 17:30:19.763308   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0910 17:30:19.814766   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:19.815036   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:19.834549   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0910 17:30:20.128981   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:20.307489   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:20.307877   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:20.625102   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:20.683791   13777 pod_ready.go:103] pod "coredns-6f6b679f8f-fk47l" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:20.806684   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:20.806816   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:20.823709   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.747658678s)
	I0910 17:30:20.823758   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:20.823770   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:20.824016   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:20.824031   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:20.824040   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:20.824048   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:20.824246   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:20.824312   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:20.824334   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:21.152748   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:21.258310   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.423679033s)
	I0910 17:30:21.258353   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:21.258363   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:21.258652   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:21.258672   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:21.258675   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:21.258682   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:21.258781   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:21.259002   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:21.259047   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:21.259050   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:21.261123   13777 addons.go:475] Verifying addon gcp-auth=true in "addons-306463"
	I0910 17:30:21.262702   13777 out.go:177] * Verifying gcp-auth addon...
	I0910 17:30:21.265139   13777 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0910 17:30:21.309290   13777 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0910 17:30:21.309307   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:21.386582   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:21.386884   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:21.629140   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:21.686431   13777 pod_ready.go:98] pod "coredns-6f6b679f8f-fk47l" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:21 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.144 HostIPs:[{IP:192.168.39.144}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-10 17:30:09 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-10 17:30:13 +0000 UTC,FinishedAt:2024-09-10 17:30:19 +0000 UTC,ContainerID:cri-o://2a84ae286f17289f7f3a5209cd0c7e4890910cd71d908afdc52a58de592bbefa,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://2a84ae286f17289f7f3a5209cd0c7e4890910cd71d908afdc52a58de592bbefa Started:0xc00269b790 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001d3ebf0} {Name:kube-api-access-vvw44 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001d3ec10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0910 17:30:21.686462   13777 pod_ready.go:82] duration metric: took 5.508078868s for pod "coredns-6f6b679f8f-fk47l" in "kube-system" namespace to be "Ready" ...
	E0910 17:30:21.686473   13777 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-fk47l" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:21 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.144 HostIPs:[{IP:192.168.39.144}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-10 17:30:09 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-10 17:30:13 +0000 UTC,FinishedAt:2024-09-10 17:30:19 +0000 UTC,ContainerID:cri-o://2a84ae286f17289f7f3a5209cd0c7e4890910cd71d908afdc52a58de592bbefa,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://2a84ae286f17289f7f3a5209cd0c7e4890910cd71d908afdc52a58de592bbefa Started:0xc00269b790 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001d3ebf0} {Name:kube-api-access-vvw44 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001d3ec10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0910 17:30:21.686485   13777 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.694377   13777 pod_ready.go:93] pod "etcd-addons-306463" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:21.694399   13777 pod_ready.go:82] duration metric: took 7.904964ms for pod "etcd-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.694410   13777 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.699906   13777 pod_ready.go:93] pod "kube-apiserver-addons-306463" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:21.699925   13777 pod_ready.go:82] duration metric: took 5.506518ms for pod "kube-apiserver-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.699935   13777 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.706491   13777 pod_ready.go:93] pod "kube-controller-manager-addons-306463" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:21.706508   13777 pod_ready.go:82] duration metric: took 6.56701ms for pod "kube-controller-manager-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.706517   13777 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-js72f" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.711913   13777 pod_ready.go:93] pod "kube-proxy-js72f" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:21.711927   13777 pod_ready.go:82] duration metric: took 5.405396ms for pod "kube-proxy-js72f" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.711934   13777 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.771105   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:21.806408   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:21.807158   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:22.082652   13777 pod_ready.go:93] pod "kube-scheduler-addons-306463" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:22.082672   13777 pod_ready.go:82] duration metric: took 370.731346ms for pod "kube-scheduler-addons-306463" in "kube-system" namespace to be "Ready" ...
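	(Editor's note, not part of the captured log: the pod_ready lines in this run poll each control-plane pod until its Ready condition reports True. Below is a minimal client-go sketch of such a check, shown only for orientation; the kubeconfig path and pod name are placeholders taken from this run, and isPodReady is a hypothetical helper, not minikube's pod_ready.go.)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// placeholder kubeconfig path for illustration only
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// fetch one of the pods this log waits on and print its readiness
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-addons-306463", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("Ready:", isPodReady(pod))
	}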
	I0910 17:30:22.082683   13777 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:22.127515   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:22.269247   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:22.306663   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:22.306817   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:22.626885   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:22.769155   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:22.806860   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:22.807059   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:23.126514   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:23.268573   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:23.304984   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:23.308344   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:23.625436   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:23.768625   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:23.806414   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:23.807737   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:24.089626   13777 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:24.126099   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:24.269316   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:24.306325   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:24.307191   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:24.626187   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:24.769060   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:24.805608   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:24.805998   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:25.284162   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:25.284693   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:25.304402   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:25.305601   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:25.625547   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:25.769118   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:25.805736   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:25.806413   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:26.125645   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:26.269608   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:26.307645   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:26.310692   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:26.588316   13777 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:26.625476   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:26.768854   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:26.805985   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:26.806757   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:27.126110   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:27.268618   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:27.305185   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:27.305610   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:27.625855   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:27.768850   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:27.806424   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:27.806708   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:28.126113   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:28.269445   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:28.306451   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:28.306949   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:28.589535   13777 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:28.625966   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:28.769016   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:28.805194   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:28.806093   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:29.125865   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:29.268979   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:29.306285   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:29.307264   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:29.625480   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:29.768316   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:29.807378   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:29.807652   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:30.126183   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:30.268852   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:30.307999   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:30.309034   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:30.625705   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:30.768655   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:30.807245   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:30.807772   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:31.088566   13777 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:31.125747   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:31.268110   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:31.309583   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:31.310629   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:31.665764   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:31.768905   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:31.804955   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:31.806706   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:32.125989   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:32.269609   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:32.307383   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:32.309129   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:32.626614   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:32.768068   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:32.806872   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:32.807203   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:33.089535   13777 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:33.125706   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:33.269256   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:33.305975   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:33.306252   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:33.706857   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:33.769189   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:33.805877   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:33.808046   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:34.126107   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:34.269399   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:34.306128   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:34.306283   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:34.625316   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:34.769118   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:34.805784   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:34.806308   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:35.131152   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:35.269262   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:35.305790   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:35.306213   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:35.587677   13777 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:35.626384   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:35.769202   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:35.806266   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:35.806509   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:36.127407   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:36.270434   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:36.310101   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:36.311099   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:36.590031   13777 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:36.590052   13777 pod_ready.go:82] duration metric: took 14.507363417s for pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:36.590060   13777 pod_ready.go:39] duration metric: took 24.446938548s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:30:36.590077   13777 api_server.go:52] waiting for apiserver process to appear ...
	I0910 17:30:36.590151   13777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:36.618197   13777 api_server.go:72] duration metric: took 27.052704342s to wait for apiserver process to appear ...
	I0910 17:30:36.618222   13777 api_server.go:88] waiting for apiserver healthz status ...
	I0910 17:30:36.618255   13777 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0910 17:30:36.624545   13777 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0910 17:30:36.625767   13777 api_server.go:141] control plane version: v1.31.0
	I0910 17:30:36.625787   13777 api_server.go:131] duration metric: took 7.55866ms to wait for apiserver health ...
	I0910 17:30:36.625795   13777 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 17:30:36.628168   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:36.635782   13777 system_pods.go:59] 18 kube-system pods found
	I0910 17:30:36.635816   13777 system_pods.go:61] "coredns-6f6b679f8f-c5qxp" [5ce9784e-e567-4ff5-a7fc-cb8589c471c1] Running
	I0910 17:30:36.635828   13777 system_pods.go:61] "csi-hostpath-attacher-0" [e5afcda1-955a-445a-95b8-dc286510fa6f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0910 17:30:36.635837   13777 system_pods.go:61] "csi-hostpath-resizer-0" [5ab24cbf-8d77-43c3-9db2-6e06eed48352] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0910 17:30:36.635848   13777 system_pods.go:61] "csi-hostpathplugin-8hg5b" [f919643c-2604-4be0-8895-fe335d9c578a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0910 17:30:36.635853   13777 system_pods.go:61] "etcd-addons-306463" [dd177bb5-fe2a-4136-a871-92cd0f322fce] Running
	I0910 17:30:36.635862   13777 system_pods.go:61] "kube-apiserver-addons-306463" [7c3b5014-0b97-43e9-b162-3856dabfa5c1] Running
	I0910 17:30:36.635868   13777 system_pods.go:61] "kube-controller-manager-addons-306463" [bd143d52-b147-4e2b-8221-4b4c215500f8] Running
	I0910 17:30:36.635878   13777 system_pods.go:61] "kube-ingress-dns-minikube" [33998c91-0157-46f1-aa90-c6001166fff3] Running
	I0910 17:30:36.635884   13777 system_pods.go:61] "kube-proxy-js72f" [97604350-aebe-4a6c-b687-0204de19c3f5] Running
	I0910 17:30:36.635890   13777 system_pods.go:61] "kube-scheduler-addons-306463" [6eb6466c-c3d4-4e16-b246-c964865de3f6] Running
	I0910 17:30:36.635900   13777 system_pods.go:61] "metrics-server-84c5f94fbc-q6wcq" [4dc23d17-89f0-47a5-8880-0cf317f8a901] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 17:30:36.635909   13777 system_pods.go:61] "nvidia-device-plugin-daemonset-smwnt" [cf2f1df4-c2cd-4ab3-927a-16595a20e831] Running
	I0910 17:30:36.635921   13777 system_pods.go:61] "registry-66c9cd494c-6qxxb" [e9ac504f-2687-4fc9-bc82-285fcdbd1c77] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0910 17:30:36.635932   13777 system_pods.go:61] "registry-proxy-dmz6w" [61812c3a-2248-430b-97e8-3b188671e0eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0910 17:30:36.635944   13777 system_pods.go:61] "snapshot-controller-56fcc65765-nnnw7" [5edd6128-e9f7-431b-822d-49f5ef92d0af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:36.635956   13777 system_pods.go:61] "snapshot-controller-56fcc65765-w9ln4" [1a1094b3-ec64-4401-b8f6-8812fa8ed85d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:36.635965   13777 system_pods.go:61] "storage-provisioner" [6196330e-c966-44c2-aedd-6dc5e570c6e5] Running
	I0910 17:30:36.635976   13777 system_pods.go:61] "tiller-deploy-b48cc5f79-4jxbr" [1dfb2d44-f679-47b9-8f2d-4d144742e3a1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0910 17:30:36.635989   13777 system_pods.go:74] duration metric: took 10.187442ms to wait for pod list to return data ...
	I0910 17:30:36.636002   13777 default_sa.go:34] waiting for default service account to be created ...
	I0910 17:30:36.640110   13777 default_sa.go:45] found service account: "default"
	I0910 17:30:36.640132   13777 default_sa.go:55] duration metric: took 4.119977ms for default service account to be created ...
	I0910 17:30:36.640142   13777 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 17:30:36.647574   13777 system_pods.go:86] 18 kube-system pods found
	I0910 17:30:36.647597   13777 system_pods.go:89] "coredns-6f6b679f8f-c5qxp" [5ce9784e-e567-4ff5-a7fc-cb8589c471c1] Running
	I0910 17:30:36.647606   13777 system_pods.go:89] "csi-hostpath-attacher-0" [e5afcda1-955a-445a-95b8-dc286510fa6f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0910 17:30:36.647612   13777 system_pods.go:89] "csi-hostpath-resizer-0" [5ab24cbf-8d77-43c3-9db2-6e06eed48352] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0910 17:30:36.647620   13777 system_pods.go:89] "csi-hostpathplugin-8hg5b" [f919643c-2604-4be0-8895-fe335d9c578a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0910 17:30:36.647624   13777 system_pods.go:89] "etcd-addons-306463" [dd177bb5-fe2a-4136-a871-92cd0f322fce] Running
	I0910 17:30:36.647629   13777 system_pods.go:89] "kube-apiserver-addons-306463" [7c3b5014-0b97-43e9-b162-3856dabfa5c1] Running
	I0910 17:30:36.647632   13777 system_pods.go:89] "kube-controller-manager-addons-306463" [bd143d52-b147-4e2b-8221-4b4c215500f8] Running
	I0910 17:30:36.647637   13777 system_pods.go:89] "kube-ingress-dns-minikube" [33998c91-0157-46f1-aa90-c6001166fff3] Running
	I0910 17:30:36.647640   13777 system_pods.go:89] "kube-proxy-js72f" [97604350-aebe-4a6c-b687-0204de19c3f5] Running
	I0910 17:30:36.647644   13777 system_pods.go:89] "kube-scheduler-addons-306463" [6eb6466c-c3d4-4e16-b246-c964865de3f6] Running
	I0910 17:30:36.647649   13777 system_pods.go:89] "metrics-server-84c5f94fbc-q6wcq" [4dc23d17-89f0-47a5-8880-0cf317f8a901] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 17:30:36.647653   13777 system_pods.go:89] "nvidia-device-plugin-daemonset-smwnt" [cf2f1df4-c2cd-4ab3-927a-16595a20e831] Running
	I0910 17:30:36.647660   13777 system_pods.go:89] "registry-66c9cd494c-6qxxb" [e9ac504f-2687-4fc9-bc82-285fcdbd1c77] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0910 17:30:36.647668   13777 system_pods.go:89] "registry-proxy-dmz6w" [61812c3a-2248-430b-97e8-3b188671e0eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0910 17:30:36.647676   13777 system_pods.go:89] "snapshot-controller-56fcc65765-nnnw7" [5edd6128-e9f7-431b-822d-49f5ef92d0af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:36.647684   13777 system_pods.go:89] "snapshot-controller-56fcc65765-w9ln4" [1a1094b3-ec64-4401-b8f6-8812fa8ed85d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:36.647688   13777 system_pods.go:89] "storage-provisioner" [6196330e-c966-44c2-aedd-6dc5e570c6e5] Running
	I0910 17:30:36.647693   13777 system_pods.go:89] "tiller-deploy-b48cc5f79-4jxbr" [1dfb2d44-f679-47b9-8f2d-4d144742e3a1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0910 17:30:36.647702   13777 system_pods.go:126] duration metric: took 7.55431ms to wait for k8s-apps to be running ...
	I0910 17:30:36.647708   13777 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 17:30:36.647747   13777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:30:36.688724   13777 system_svc.go:56] duration metric: took 40.998614ms WaitForService to wait for kubelet
	I0910 17:30:36.688757   13777 kubeadm.go:582] duration metric: took 27.123268565s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:30:36.688785   13777 node_conditions.go:102] verifying NodePressure condition ...
	I0910 17:30:36.692318   13777 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 17:30:36.692341   13777 node_conditions.go:123] node cpu capacity is 2
	I0910 17:30:36.692353   13777 node_conditions.go:105] duration metric: took 3.562021ms to run NodePressure ...
	I0910 17:30:36.692364   13777 start.go:241] waiting for startup goroutines ...
	I0910 17:30:36.769013   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:36.805343   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:36.807812   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:37.125928   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:37.268408   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:37.307358   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:37.307370   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:37.626450   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:37.769104   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:37.807631   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:37.808032   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:38.410369   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:38.410675   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:38.410845   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:38.411724   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:38.626551   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:38.772173   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:38.813605   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:38.813975   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:39.126089   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:39.268594   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:39.306434   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:39.307212   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:39.627575   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:39.769119   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:39.806793   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:39.806955   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:40.126013   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:40.269594   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:40.307652   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:40.308116   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:40.626874   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:40.772237   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:40.809133   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:40.810841   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:41.126532   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:41.268653   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:41.310669   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:41.310958   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:41.638682   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:41.769185   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:41.805908   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:41.805996   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:42.125541   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:42.274727   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:42.314152   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:42.314527   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:42.625893   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:42.769480   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:42.805680   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:42.812721   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:43.125909   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:43.269084   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:43.306576   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:43.306976   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:43.715505   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:43.771618   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:43.805941   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:43.806723   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:44.124772   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:44.269280   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:44.306120   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:44.306950   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:44.625991   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:44.768665   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:44.805454   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:44.807495   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:45.126730   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:45.269364   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:45.306168   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:45.306714   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:45.631613   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:45.880383   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:45.883658   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:45.884726   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:46.127460   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:46.269296   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:46.306086   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:46.306509   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:46.625344   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:46.769098   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:46.806534   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:46.806996   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:47.124955   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:47.268498   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:47.306845   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:47.307880   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:47.626319   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:47.769012   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:47.806321   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:47.807436   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:48.125713   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:48.268906   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:48.306844   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:48.307565   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:48.626864   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:48.768630   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:48.805303   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:48.805947   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:49.131069   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:49.269163   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:49.305787   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:49.305910   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:49.625678   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:49.769604   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:49.809587   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:49.810440   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:50.125736   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:50.269191   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:50.306409   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:50.306739   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:50.625464   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:50.768892   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:50.805409   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:50.806243   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:51.125616   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:51.269034   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:51.306610   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:51.306959   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:51.625727   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:51.769169   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:51.806830   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:51.810306   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:52.125814   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:52.270051   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:52.306086   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:52.306192   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:52.626473   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:52.768916   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:52.806305   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:52.806665   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:53.125899   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:53.269024   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:53.305645   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:53.307059   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:53.627179   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:53.770551   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:53.806405   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:53.806674   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:54.126024   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:54.269166   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:54.371393   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:54.372173   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:54.625924   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:54.768277   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:54.806663   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:54.806832   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:55.125469   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:55.268594   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:55.305556   13777 kapi.go:107] duration metric: took 36.503445805s to wait for kubernetes.io/minikube-addons=registry ...
	I0910 17:30:55.313333   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:55.631573   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:55.768955   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:55.805802   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:56.125742   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:56.270140   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:56.305860   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:56.625644   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:56.769297   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:56.806369   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:57.127588   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:57.270814   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:57.305110   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:57.625709   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:57.768903   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:57.805501   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:58.126627   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:58.269044   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:58.305193   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:58.626293   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:58.768712   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:58.804911   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:59.125828   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:59.269468   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:59.306105   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:59.625637   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:59.769614   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:59.807183   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:00.127716   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:00.270273   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:00.306165   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:00.625737   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:00.768998   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:00.805477   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:01.125499   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:01.269176   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:01.306304   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:01.626469   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:01.768732   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:01.805496   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:02.127553   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:02.269284   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:02.305980   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:02.628890   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:02.768835   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:02.805753   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:03.126003   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:03.268927   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:03.306626   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:03.626444   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:03.768871   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:03.805456   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:04.125203   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:04.268865   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:04.306288   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:04.627855   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:04.769364   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:04.806388   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:05.127184   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:05.275177   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:05.381315   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:05.625844   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:05.769267   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:05.805825   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:06.126554   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:06.268758   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:06.306366   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:06.627171   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:06.770092   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:06.806226   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:07.126711   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:07.269048   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:07.306150   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:07.625655   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:07.768742   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:07.806033   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:08.126084   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:08.269282   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:08.305959   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:08.626832   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:08.769318   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:08.807491   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:09.126941   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:09.275226   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:09.308718   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:09.626407   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:09.769717   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:09.813779   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:10.125731   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:10.269355   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:10.309604   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:10.627981   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:10.770045   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:10.870554   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:11.128226   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:11.268520   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:11.308019   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:11.626140   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:11.769611   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:11.806272   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:12.126145   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:12.269471   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:12.306580   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:12.644024   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:12.770364   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:12.807268   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:13.127370   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:13.271524   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:13.306201   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:13.626164   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:13.768629   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:13.805319   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:14.126256   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:14.604140   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:14.604741   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:14.625880   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:14.769542   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:14.805015   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:15.129370   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:15.270705   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:15.306168   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:15.625569   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:15.769509   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:15.806404   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:16.127122   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:16.268486   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:16.306256   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:16.627609   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:16.768807   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:16.805284   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:17.126777   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:17.273904   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:17.306160   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:17.626219   13777 kapi.go:107] duration metric: took 58.005179225s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0910 17:31:17.769064   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:17.806337   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:18.269605   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:18.306821   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:18.768968   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:18.806084   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:19.269068   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:19.305883   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:19.768607   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:19.805388   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:20.269024   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:20.305384   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:20.770422   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:20.805852   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:21.268928   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:21.305819   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:21.770149   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:21.806244   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:22.268897   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:22.305737   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:22.769883   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:22.811948   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:23.269476   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:23.306255   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:23.770445   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:23.806935   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:24.268635   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:24.305750   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:24.768424   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:24.805735   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:25.269370   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:25.306913   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:25.770284   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:25.805807   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:26.269063   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:26.305656   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:26.769396   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:26.805876   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:27.268241   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:27.307415   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:27.771452   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:27.806295   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:28.290195   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:28.311170   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:28.771373   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:28.805752   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:29.269499   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:29.306013   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:29.769982   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:29.871116   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:30.268936   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:30.305384   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:30.769209   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:30.806494   13777 kapi.go:107] duration metric: took 1m12.005153392s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0910 17:31:31.269701   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:31.769526   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:32.268540   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:32.771389   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:33.272123   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:33.769698   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:34.269894   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:34.769472   13777 kapi.go:107] duration metric: took 1m13.504330818s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0910 17:31:34.770991   13777 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-306463 cluster.
	I0910 17:31:34.772225   13777 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0910 17:31:34.773540   13777 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0910 17:31:34.774682   13777 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, metrics-server, inspektor-gadget, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0910 17:31:34.775694   13777 addons.go:510] duration metric: took 1m25.210169317s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner metrics-server inspektor-gadget helm-tiller yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0910 17:31:34.775727   13777 start.go:246] waiting for cluster config update ...
	I0910 17:31:34.775743   13777 start.go:255] writing updated cluster config ...
	I0910 17:31:34.775953   13777 ssh_runner.go:195] Run: rm -f paused
	I0910 17:31:34.827173   13777 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 17:31:34.828957   13777 out.go:177] * Done! kubectl is now configured to use "addons-306463" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.067976294Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=071454ab-143d-40b0-8897-6fda28aa6c29 name=/runtime.v1.RuntimeService/Version
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.069140498Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68f5673d-12dd-4a5d-9cfc-4e0fb5f7d48a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.070294972Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990202070269947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68f5673d-12dd-4a5d-9cfc-4e0fb5f7d48a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.070821045Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab70c14e-f027-447d-aac0-837f5beddcf3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.070957075Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab70c14e-f027-447d-aac0-837f5beddcf3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.071242209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e14037188ca65c0e588aaf5a8ac39857e019a8ac776ce4caae64d74a2e4b08e4,PodSandboxId:3cb988515a1408701ee5ded3dd0c31736095b04056337ad3ab85a206fe901aff,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725990192830046325,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-c627d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3367f866-b502-450a-b09b-d82059477fff,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45e5863717588980143d4e9ea227a1a055250dd646faf24d6fe1c739f4ef06e4,PodSandboxId:5b855cd6ea777fcbc69b131322f184d91c203f218051c1eff725089aef1a8895,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1725990054572609769,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7eaa2d0d-141b-494c-aa38-7e6697727bb4,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582aef687e6f17f3c2a84515f32a26653e60025be2de8edfede124d4524777cf,PodSandboxId:d0831ebcc1f1d7c65d9218dea5fa54b7beacaf1a8142f2630e7f0b73140fb9b1,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725989493343593602,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9cff5,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: c71f9bb4-5d5d-48be-b1a6-4d832400d952,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0dcc0b067c1fe33cab5925ccf93236b0b4235680f7460bc114b84a50691c3a5,PodSandboxId:203b4e990f94eb34108df6c75f63302a21901ab5de4b10d66645bffb8a76ff0e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725989466907044617,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-tddrl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bfd83841-db7b-49e3-9721-8b75e0cdd1c7,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4919ba67c923ab8d43533c369b87ef4ced592fbe0c0deb116fbbb857ebf533ae,PodSandboxId:409de028dc3d4ce1988813969f751c45cb1b7bec3478589a41e13f94d2bb567f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725989462977955428,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zp9t8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a3fb5872-1ea0-4a79-a942-351d4c144608,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e0270fff871840c9a822cf5b0f515d34827f1d9fed151a0f2d5082fa11dd27b,PodSandboxId:bf5609ea0b023af2c716c42016528abbd115cf45498ea88cc431983aea60eb64,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172598
9440357118697,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6wcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc23d17-89f0-47a5-8880-0cf317f8a901,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2884c8e7918f3f66e43acc12fa41b52b90eaae0693407161bbabf4a79747bf,PodSandboxId:f3d0ecd016c61169dce7badc41eb39f61e6cb0d229dd82f9df2eaa408100403d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725989416147478719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6196330e-c966-44c2-aedd-6dc5e570c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a215f27453dd39d7db9f39e41ec7e4ac2e49500ff6c716a9bfafc824954266b,PodSandboxId:a8d7383a3c4c8c7c4d4b4b82ad25e06f4d8654846ce6d88ed03d348a31c77acf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0
e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725989412993871459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5qxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ce9784e-e567-4ff5-a7fc-cb8589c471c1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a73d39390d5ae35c72fc26b22cf8e0830151755c0757d4db86635b5c22b5975,PodSandboxId:8987d0bb394a57cd9a58f92728311e29e63b4b78f29249c4fc624a2e49afcc09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725989410338539077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js72f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97604350-aebe-4a6c-b687-0204de19c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f698d8d7966b01930ddbc6587bb08f04f7e2f8adad63b8a3e9887aed729b9495,PodSandboxId:3e898142a158834481c7a1d8ff69ecd76325a6c790734ed640797010ed8e7649,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725989399351056533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1009c91d9d6b512577ae300fa67a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2fd106868bc397fe25e2b6527f1b60b5821fe72928318b4f27a3432e9cea54,PodSandboxId:bff13732bced4bb24a37b7520e34f6c69fd42234a9343b0a1807cb796c72700a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b1580
6c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725989399366385088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef6519980e55c7294622197c7f614a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a702e238565e00637134e10aceb4b0a000411fb06af5a78bdce0a089d6343b7b,PodSandboxId:bdfc49df82eed18f6ae46cecdb49805465a79a0b7da6ba3dc74ffb7bd3ee5038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c
141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725989399317066221,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa4afb4c8677b28249b35dd2b3e2495,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9820f2fa1dd2a81346b7b004efde785c85e92df60bded4a7237f9c20a0de805c,PodSandboxId:636a4a297aa530f5e18e30cd561b223da7ddada95440c3fce95b8b691604e464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661
e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725989399332095776,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a02b0c0abfab97cfeed0b549a823c12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab70c14e-f027-447d-aac0-837f5beddcf3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.106613012Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=763ea0b2-b5e3-4206-9e03-ffd251a57bee name=/runtime.v1.RuntimeService/Version
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.106701436Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=763ea0b2-b5e3-4206-9e03-ffd251a57bee name=/runtime.v1.RuntimeService/Version
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.108157581Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35a4677e-9ff2-4e0e-8eae-e4fd33e873c5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.109308016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990202109282090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35a4677e-9ff2-4e0e-8eae-e4fd33e873c5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.110011247Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0da9b199-2592-4b2f-a4b6-0532bf803ab7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.110079884Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0da9b199-2592-4b2f-a4b6-0532bf803ab7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.110362990Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e14037188ca65c0e588aaf5a8ac39857e019a8ac776ce4caae64d74a2e4b08e4,PodSandboxId:3cb988515a1408701ee5ded3dd0c31736095b04056337ad3ab85a206fe901aff,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725990192830046325,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-c627d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3367f866-b502-450a-b09b-d82059477fff,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45e5863717588980143d4e9ea227a1a055250dd646faf24d6fe1c739f4ef06e4,PodSandboxId:5b855cd6ea777fcbc69b131322f184d91c203f218051c1eff725089aef1a8895,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1725990054572609769,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7eaa2d0d-141b-494c-aa38-7e6697727bb4,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582aef687e6f17f3c2a84515f32a26653e60025be2de8edfede124d4524777cf,PodSandboxId:d0831ebcc1f1d7c65d9218dea5fa54b7beacaf1a8142f2630e7f0b73140fb9b1,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725989493343593602,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9cff5,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: c71f9bb4-5d5d-48be-b1a6-4d832400d952,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0dcc0b067c1fe33cab5925ccf93236b0b4235680f7460bc114b84a50691c3a5,PodSandboxId:203b4e990f94eb34108df6c75f63302a21901ab5de4b10d66645bffb8a76ff0e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725989466907044617,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-tddrl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bfd83841-db7b-49e3-9721-8b75e0cdd1c7,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4919ba67c923ab8d43533c369b87ef4ced592fbe0c0deb116fbbb857ebf533ae,PodSandboxId:409de028dc3d4ce1988813969f751c45cb1b7bec3478589a41e13f94d2bb567f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725989462977955428,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zp9t8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a3fb5872-1ea0-4a79-a942-351d4c144608,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e0270fff871840c9a822cf5b0f515d34827f1d9fed151a0f2d5082fa11dd27b,PodSandboxId:bf5609ea0b023af2c716c42016528abbd115cf45498ea88cc431983aea60eb64,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172598
9440357118697,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6wcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc23d17-89f0-47a5-8880-0cf317f8a901,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2884c8e7918f3f66e43acc12fa41b52b90eaae0693407161bbabf4a79747bf,PodSandboxId:f3d0ecd016c61169dce7badc41eb39f61e6cb0d229dd82f9df2eaa408100403d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725989416147478719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6196330e-c966-44c2-aedd-6dc5e570c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a215f27453dd39d7db9f39e41ec7e4ac2e49500ff6c716a9bfafc824954266b,PodSandboxId:a8d7383a3c4c8c7c4d4b4b82ad25e06f4d8654846ce6d88ed03d348a31c77acf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0
e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725989412993871459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5qxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ce9784e-e567-4ff5-a7fc-cb8589c471c1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a73d39390d5ae35c72fc26b22cf8e0830151755c0757d4db86635b5c22b5975,PodSandboxId:8987d0bb394a57cd9a58f92728311e29e63b4b78f29249c4fc624a2e49afcc09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725989410338539077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js72f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97604350-aebe-4a6c-b687-0204de19c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f698d8d7966b01930ddbc6587bb08f04f7e2f8adad63b8a3e9887aed729b9495,PodSandboxId:3e898142a158834481c7a1d8ff69ecd76325a6c790734ed640797010ed8e7649,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725989399351056533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1009c91d9d6b512577ae300fa67a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2fd106868bc397fe25e2b6527f1b60b5821fe72928318b4f27a3432e9cea54,PodSandboxId:bff13732bced4bb24a37b7520e34f6c69fd42234a9343b0a1807cb796c72700a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b1580
6c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725989399366385088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef6519980e55c7294622197c7f614a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a702e238565e00637134e10aceb4b0a000411fb06af5a78bdce0a089d6343b7b,PodSandboxId:bdfc49df82eed18f6ae46cecdb49805465a79a0b7da6ba3dc74ffb7bd3ee5038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c
141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725989399317066221,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa4afb4c8677b28249b35dd2b3e2495,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9820f2fa1dd2a81346b7b004efde785c85e92df60bded4a7237f9c20a0de805c,PodSandboxId:636a4a297aa530f5e18e30cd561b223da7ddada95440c3fce95b8b691604e464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661
e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725989399332095776,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a02b0c0abfab97cfeed0b549a823c12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0da9b199-2592-4b2f-a4b6-0532bf803ab7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.120783535Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98583993-980c-47ea-8f46-57a444091dee name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.121124046Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3cb988515a1408701ee5ded3dd0c31736095b04056337ad3ab85a206fe901aff,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-c627d,Uid:3367f866-b502-450a-b09b-d82059477fff,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725990191916378388,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-c627d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3367f866-b502-450a-b09b-d82059477fff,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:43:11.603920859Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5b855cd6ea777fcbc69b131322f184d91c203f218051c1eff725089aef1a8895,Metadata:&PodSandboxMetadata{Name:nginx,Uid:7eaa2d0d-141b-494c-aa38-7e6697727bb4,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1725990052227867998,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7eaa2d0d-141b-494c-aa38-7e6697727bb4,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:40:51.914165753Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3d0a670d44a6654b13ef6179772139986549596d2d0ccae8ba6bb61d289cb7eb,Metadata:&PodSandboxMetadata{Name:busybox,Uid:a237baa2-0c28-439f-8fab-71565e2afef5,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989495415541473,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a237baa2-0c28-439f-8fab-71565e2afef5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:31:35.101543024Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d0831ebcc1f1d7c65d
9218dea5fa54b7beacaf1a8142f2630e7f0b73140fb9b1,Metadata:&PodSandboxMetadata{Name:gcp-auth-89d5ffd79-9cff5,Uid:c71f9bb4-5d5d-48be-b1a6-4d832400d952,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989488283918507,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9cff5,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c71f9bb4-5d5d-48be-b1a6-4d832400d952,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 89d5ffd79,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:30:21.199549340Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bf5609ea0b023af2c716c42016528abbd115cf45498ea88cc431983aea60eb64,Metadata:&PodSandboxMetadata{Name:metrics-server-84c5f94fbc-q6wcq,Uid:4dc23d17-89f0-47a5-8880-0cf317f8a901,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989415643168415,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-se
rver-84c5f94fbc-q6wcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc23d17-89f0-47a5-8880-0cf317f8a901,k8s-app: metrics-server,pod-template-hash: 84c5f94fbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:30:15.333362678Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f3d0ecd016c61169dce7badc41eb39f61e6cb0d229dd82f9df2eaa408100403d,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6196330e-c966-44c2-aedd-6dc5e570c6e5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989415225549647,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6196330e-c966-44c2-aedd-6dc5e570c6e5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"lab
els\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-10T17:30:14.606699429Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a8d7383a3c4c8c7c4d4b4b82ad25e06f4d8654846ce6d88ed03d348a31c77acf,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-c5qxp,Uid:5ce9784e-e567-4ff5-a7fc-cb8589c471c1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989410039298540,Labels:map[string]string{io.kubernetes.container.name: POD,io.kub
ernetes.pod.name: coredns-6f6b679f8f-c5qxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ce9784e-e567-4ff5-a7fc-cb8589c471c1,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:30:09.718625078Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8987d0bb394a57cd9a58f92728311e29e63b4b78f29249c4fc624a2e49afcc09,Metadata:&PodSandboxMetadata{Name:kube-proxy-js72f,Uid:97604350-aebe-4a6c-b687-0204de19c3f5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989409644803472,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-js72f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97604350-aebe-4a6c-b687-0204de19c3f5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:30:09.305387588Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodS
andbox{Id:3e898142a158834481c7a1d8ff69ecd76325a6c790734ed640797010ed8e7649,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-306463,Uid:1009c91d9d6b512577ae300fa67a4ebd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989399167615087,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1009c91d9d6b512577ae300fa67a4ebd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1009c91d9d6b512577ae300fa67a4ebd,kubernetes.io/config.seen: 2024-09-10T17:29:58.695991064Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bff13732bced4bb24a37b7520e34f6c69fd42234a9343b0a1807cb796c72700a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-306463,Uid:33ef6519980e55c7294622197c7f614a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989399167130195,Labels:map[string]string{componen
t: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef6519980e55c7294622197c7f614a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 33ef6519980e55c7294622197c7f614a,kubernetes.io/config.seen: 2024-09-10T17:29:58.695989801Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bdfc49df82eed18f6ae46cecdb49805465a79a0b7da6ba3dc74ffb7bd3ee5038,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-306463,Uid:bfa4afb4c8677b28249b35dd2b3e2495,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989399145060473,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa4afb4c8677b28249b35dd2b3e2495,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-ap
iserver.advertise-address.endpoint: 192.168.39.144:8443,kubernetes.io/config.hash: bfa4afb4c8677b28249b35dd2b3e2495,kubernetes.io/config.seen: 2024-09-10T17:29:58.695988647Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:636a4a297aa530f5e18e30cd561b223da7ddada95440c3fce95b8b691604e464,Metadata:&PodSandboxMetadata{Name:etcd-addons-306463,Uid:2a02b0c0abfab97cfeed0b549a823c12,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989399144575099,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a02b0c0abfab97cfeed0b549a823c12,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.144:2379,kubernetes.io/config.hash: 2a02b0c0abfab97cfeed0b549a823c12,kubernetes.io/config.seen: 2024-09-10T17:29:58.695985659Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector
/interceptors.go:74" id=98583993-980c-47ea-8f46-57a444091dee name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.121658411Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fdbe56bc-e0da-4e05-860c-3a2c324d34c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.121738959Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fdbe56bc-e0da-4e05-860c-3a2c324d34c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.122059398Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e14037188ca65c0e588aaf5a8ac39857e019a8ac776ce4caae64d74a2e4b08e4,PodSandboxId:3cb988515a1408701ee5ded3dd0c31736095b04056337ad3ab85a206fe901aff,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725990192830046325,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-c627d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3367f866-b502-450a-b09b-d82059477fff,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45e5863717588980143d4e9ea227a1a055250dd646faf24d6fe1c739f4ef06e4,PodSandboxId:5b855cd6ea777fcbc69b131322f184d91c203f218051c1eff725089aef1a8895,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1725990054572609769,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7eaa2d0d-141b-494c-aa38-7e6697727bb4,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582aef687e6f17f3c2a84515f32a26653e60025be2de8edfede124d4524777cf,PodSandboxId:d0831ebcc1f1d7c65d9218dea5fa54b7beacaf1a8142f2630e7f0b73140fb9b1,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725989493343593602,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9cff5,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: c71f9bb4-5d5d-48be-b1a6-4d832400d952,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e0270fff871840c9a822cf5b0f515d34827f1d9fed151a0f2d5082fa11dd27b,PodSandboxId:bf5609ea0b023af2c716c42016528abbd115cf45498ea88cc431983aea60eb64,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725989440357118697,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6wcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc23d17-89f0-47a5-8880-0cf317f8a901,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2884c8e7918f3f66e43acc12fa41b52b90eaae0693407161bbabf4a79747bf,PodSandboxId:f3d0ecd016c61169dce7badc41eb39f61e6cb0d229dd82f9df2eaa408100403d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1725989416147478719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6196330e-c966-44c2-aedd-6dc5e570c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a215f27453dd39d7db9f39e41ec7e4ac2e49500ff6c716a9bfafc824954266b,PodSandboxId:a8d7383a3c4c8c7c4d4b4b82ad25e06f4d8654846ce6d88ed03d348a31c77acf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725989412
993871459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5qxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ce9784e-e567-4ff5-a7fc-cb8589c471c1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a73d39390d5ae35c72fc26b22cf8e0830151755c0757d4db86635b5c22b5975,PodSandboxId:8987d0bb394a57cd9a58f92728311e29e63b4b78f29249c4fc624a2e49afcc09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce56
0a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725989410338539077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js72f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97604350-aebe-4a6c-b687-0204de19c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f698d8d7966b01930ddbc6587bb08f04f7e2f8adad63b8a3e9887aed729b9495,PodSandboxId:3e898142a158834481c7a1d8ff69ecd76325a6c790734ed640797010ed8e7649,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725989399351056533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1009c91d9d6b512577ae300fa67a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2fd106868bc397fe25e2b6527f1b60b5821fe72928318b4f27a3432e9cea54,PodSandboxId:bff13732bced4bb24a37b7520e34f6c69fd42234a9343b0a1807cb796c72700a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725989399366385088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef6519980e55c7294622197c7f614a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a702e238565e00637134e10aceb4b0a000411fb06af5a78bdce0a089d6343b7b,PodSandboxId:bdfc49df82eed18f6ae46cecdb49805465a79a0b7da6ba3dc74ffb7bd3ee5038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725989399317066221,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa4afb4c8677b28249b35dd2b3e2495,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9820f2fa1dd2a81346b7b004efde785c85e92df60bded4a7237f9c20a0de805c,PodSandboxId:636a4a297aa530f5e18e30cd561b223da7ddada95440c3fce95b8b691604e464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725989399332095776,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a02b0c0abfab97cfeed0b549a823c12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fdbe56bc-e0da-4e05-860c-3a2c324d34c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.152342371Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab2c1127-d76a-4aa4-82d5-e6a711c38d33 name=/runtime.v1.RuntimeService/Version
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.152429187Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab2c1127-d76a-4aa4-82d5-e6a711c38d33 name=/runtime.v1.RuntimeService/Version
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.153593953Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ca3a491-f784-41ea-bbba-4f0c5f9867eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.154828056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990202154801918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ca3a491-f784-41ea-bbba-4f0c5f9867eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.155393958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f226b5ed-a434-4756-be99-a01faef9d9ed name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.155467527Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f226b5ed-a434-4756-be99-a01faef9d9ed name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:43:22 addons-306463 crio[672]: time="2024-09-10 17:43:22.155730686Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e14037188ca65c0e588aaf5a8ac39857e019a8ac776ce4caae64d74a2e4b08e4,PodSandboxId:3cb988515a1408701ee5ded3dd0c31736095b04056337ad3ab85a206fe901aff,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725990192830046325,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-c627d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3367f866-b502-450a-b09b-d82059477fff,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45e5863717588980143d4e9ea227a1a055250dd646faf24d6fe1c739f4ef06e4,PodSandboxId:5b855cd6ea777fcbc69b131322f184d91c203f218051c1eff725089aef1a8895,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1725990054572609769,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7eaa2d0d-141b-494c-aa38-7e6697727bb4,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582aef687e6f17f3c2a84515f32a26653e60025be2de8edfede124d4524777cf,PodSandboxId:d0831ebcc1f1d7c65d9218dea5fa54b7beacaf1a8142f2630e7f0b73140fb9b1,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725989493343593602,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9cff5,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: c71f9bb4-5d5d-48be-b1a6-4d832400d952,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0dcc0b067c1fe33cab5925ccf93236b0b4235680f7460bc114b84a50691c3a5,PodSandboxId:203b4e990f94eb34108df6c75f63302a21901ab5de4b10d66645bffb8a76ff0e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725989466907044617,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-tddrl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bfd83841-db7b-49e3-9721-8b75e0cdd1c7,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4919ba67c923ab8d43533c369b87ef4ced592fbe0c0deb116fbbb857ebf533ae,PodSandboxId:409de028dc3d4ce1988813969f751c45cb1b7bec3478589a41e13f94d2bb567f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725989462977955428,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zp9t8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a3fb5872-1ea0-4a79-a942-351d4c144608,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e0270fff871840c9a822cf5b0f515d34827f1d9fed151a0f2d5082fa11dd27b,PodSandboxId:bf5609ea0b023af2c716c42016528abbd115cf45498ea88cc431983aea60eb64,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172598
9440357118697,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6wcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc23d17-89f0-47a5-8880-0cf317f8a901,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2884c8e7918f3f66e43acc12fa41b52b90eaae0693407161bbabf4a79747bf,PodSandboxId:f3d0ecd016c61169dce7badc41eb39f61e6cb0d229dd82f9df2eaa408100403d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725989416147478719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6196330e-c966-44c2-aedd-6dc5e570c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a215f27453dd39d7db9f39e41ec7e4ac2e49500ff6c716a9bfafc824954266b,PodSandboxId:a8d7383a3c4c8c7c4d4b4b82ad25e06f4d8654846ce6d88ed03d348a31c77acf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0
e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725989412993871459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5qxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ce9784e-e567-4ff5-a7fc-cb8589c471c1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a73d39390d5ae35c72fc26b22cf8e0830151755c0757d4db86635b5c22b5975,PodSandboxId:8987d0bb394a57cd9a58f92728311e29e63b4b78f29249c4fc624a2e49afcc09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725989410338539077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js72f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97604350-aebe-4a6c-b687-0204de19c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f698d8d7966b01930ddbc6587bb08f04f7e2f8adad63b8a3e9887aed729b9495,PodSandboxId:3e898142a158834481c7a1d8ff69ecd76325a6c790734ed640797010ed8e7649,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725989399351056533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1009c91d9d6b512577ae300fa67a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2fd106868bc397fe25e2b6527f1b60b5821fe72928318b4f27a3432e9cea54,PodSandboxId:bff13732bced4bb24a37b7520e34f6c69fd42234a9343b0a1807cb796c72700a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b1580
6c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725989399366385088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef6519980e55c7294622197c7f614a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a702e238565e00637134e10aceb4b0a000411fb06af5a78bdce0a089d6343b7b,PodSandboxId:bdfc49df82eed18f6ae46cecdb49805465a79a0b7da6ba3dc74ffb7bd3ee5038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c
141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725989399317066221,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa4afb4c8677b28249b35dd2b3e2495,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9820f2fa1dd2a81346b7b004efde785c85e92df60bded4a7237f9c20a0de805c,PodSandboxId:636a4a297aa530f5e18e30cd561b223da7ddada95440c3fce95b8b691604e464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661
e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725989399332095776,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a02b0c0abfab97cfeed0b549a823c12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f226b5ed-a434-4756-be99-a01faef9d9ed name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e14037188ca65       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app           0                   3cb988515a140       hello-world-app-55bf9c44b4-c627d
	45e5863717588       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   5b855cd6ea777       nginx
	582aef687e6f1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   d0831ebcc1f1d       gcp-auth-89d5ffd79-9cff5
	b0dcc0b067c1f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              patch                     0                   203b4e990f94e       ingress-nginx-admission-patch-tddrl
	4919ba67c923a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              create                    0                   409de028dc3d4       ingress-nginx-admission-create-zp9t8
	9e0270fff8718       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   bf5609ea0b023       metrics-server-84c5f94fbc-q6wcq
	bc2884c8e7918       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             13 minutes ago      Running             storage-provisioner       0                   f3d0ecd016c61       storage-provisioner
	0a215f27453dd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             13 minutes ago      Running             coredns                   0                   a8d7383a3c4c8       coredns-6f6b679f8f-c5qxp
	3a73d39390d5a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             13 minutes ago      Running             kube-proxy                0                   8987d0bb394a5       kube-proxy-js72f
	1b2fd106868bc       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             13 minutes ago      Running             kube-controller-manager   0                   bff13732bced4       kube-controller-manager-addons-306463
	f698d8d7966b0       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             13 minutes ago      Running             kube-scheduler            0                   3e898142a1588       kube-scheduler-addons-306463
	9820f2fa1dd2a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   636a4a297aa53       etcd-addons-306463
	a702e238565e0       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             13 minutes ago      Running             kube-apiserver            0                   bdfc49df82eed       kube-apiserver-addons-306463
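
The table above is the CRI-level view of the node and can be regenerated directly against the profile with crictl; a minimal sketch, assuming the addons-306463 VM is still up:

    minikube -p addons-306463 ssh -- sudo crictl ps -a

Every container except the two exited admission-webhook jobs (create/patch) is Running with ATTEMPT 0, i.e. nothing on the node has restarted.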
	
	
	==> coredns [0a215f27453dd39d7db9f39e41ec7e4ac2e49500ff6c716a9bfafc824954266b] <==
	[INFO] 127.0.0.1:46294 - 34342 "HINFO IN 2988755105619345519.8178505747039127944. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010883316s
	[INFO] 10.244.0.7:51528 - 9833 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000522483s
	[INFO] 10.244.0.7:51528 - 52590 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000287548s
	[INFO] 10.244.0.7:49547 - 11105 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000080302s
	[INFO] 10.244.0.7:49547 - 54119 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038918s
	[INFO] 10.244.0.7:51045 - 63866 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000058098s
	[INFO] 10.244.0.7:51045 - 57464 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00006068s
	[INFO] 10.244.0.7:48884 - 49406 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000072191s
	[INFO] 10.244.0.7:48884 - 18943 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000044584s
	[INFO] 10.244.0.7:48605 - 63647 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000058049s
	[INFO] 10.244.0.7:48605 - 26013 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00010894s
	[INFO] 10.244.0.7:53898 - 7835 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000034622s
	[INFO] 10.244.0.7:53898 - 30617 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00003803s
	[INFO] 10.244.0.7:41577 - 5082 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072855s
	[INFO] 10.244.0.7:41577 - 14808 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000127251s
	[INFO] 10.244.0.7:35153 - 44630 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000117348s
	[INFO] 10.244.0.7:35153 - 21591 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000061476s
	[INFO] 10.244.0.22:53652 - 52736 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000525847s
	[INFO] 10.244.0.22:51909 - 33747 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000080647s
	[INFO] 10.244.0.22:59992 - 15038 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000160421s
	[INFO] 10.244.0.22:50214 - 27016 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000071597s
	[INFO] 10.244.0.22:58245 - 14301 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127195s
	[INFO] 10.244.0.22:46404 - 10714 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000079794s
	[INFO] 10.244.0.22:37437 - 16123 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001244875s
	[INFO] 10.244.0.22:55509 - 30140 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001661686s
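
The NXDOMAIN/NOERROR pairs above are normal search-path expansion rather than DNS failures: with the default ndots:5, a lookup of registry.kube-system.svc.cluster.local (fewer than five dots) is first tried with each cluster search suffix appended, producing the NXDOMAIN pairs, before the absolute name returns NOERROR. A quick way to see the search list driving this, using the busybox pod already running in the default namespace (a sketch):

    kubectl --context addons-306463 exec busybox -- cat /etc/resolv.conf

which would typically show the cluster DNS nameserver, the three cluster search suffixes, and options ndots:5.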
	
	
	==> describe nodes <==
	Name:               addons-306463
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-306463
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=addons-306463
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T17_30_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-306463
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:30:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-306463
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 17:43:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 17:41:07 +0000   Tue, 10 Sep 2024 17:30:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 17:41:07 +0000   Tue, 10 Sep 2024 17:30:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 17:41:07 +0000   Tue, 10 Sep 2024 17:30:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 17:41:07 +0000   Tue, 10 Sep 2024 17:30:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.144
	  Hostname:    addons-306463
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 dd3fd5b0d8a84e1595be7f0c7913d0fd
	  System UUID:                dd3fd5b0-d8a8-4e15-95be-7f0c7913d0fd
	  Boot ID:                    41ce101e-c89c-4773-988f-9e0f2e4ee815
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-c627d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-89d5ffd79-9cff5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-6f6b679f8f-c5qxp                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-addons-306463                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-306463             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-306463    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-js72f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-306463             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-q6wcq          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         13m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node addons-306463 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node addons-306463 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node addons-306463 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m   kubelet          Node addons-306463 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node addons-306463 event: Registered Node addons-306463 in Controller
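
The allocated-resources summary follows directly from the pod table: CPU requests 100m + 100m + 250m + 200m + 100m + 100m = 850m of 2000m allocatable (42%), and memory requests 70Mi + 100Mi + 200Mi = 370Mi against 3912780Ki, roughly 9% (the percentages are shown truncated to whole numbers), so the scheduler-visible requests sit comfortably below capacity.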
	
	
	==> dmesg <==
	[  +5.181145] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.627457] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.626250] kauditd_printk_skb: 2 callbacks suppressed
	[Sep10 17:31] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.055474] kauditd_printk_skb: 66 callbacks suppressed
	[  +5.112841] kauditd_printk_skb: 31 callbacks suppressed
	[ +13.239386] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.062688] kauditd_printk_skb: 49 callbacks suppressed
	[  +9.206322] kauditd_printk_skb: 9 callbacks suppressed
	[Sep10 17:32] kauditd_printk_skb: 28 callbacks suppressed
	[Sep10 17:34] kauditd_printk_skb: 28 callbacks suppressed
	[Sep10 17:37] kauditd_printk_skb: 28 callbacks suppressed
	[Sep10 17:39] kauditd_printk_skb: 28 callbacks suppressed
	[  +9.622351] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.013540] kauditd_printk_skb: 39 callbacks suppressed
	[Sep10 17:40] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.890154] kauditd_printk_skb: 20 callbacks suppressed
	[ +15.600296] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.244502] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.735054] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.942009] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.692638] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.461030] kauditd_printk_skb: 36 callbacks suppressed
	[Sep10 17:43] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.497989] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [9820f2fa1dd2a81346b7b004efde785c85e92df60bded4a7237f9c20a0de805c] <==
	{"level":"warn","ts":"2024-09-10T17:30:45.866761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.415869ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:30:45.866856Z","caller":"traceutil/trace.go:171","msg":"trace[1068047252] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:927; }","duration":"110.525419ms","start":"2024-09-10T17:30:45.756319Z","end":"2024-09-10T17:30:45.866845Z","steps":["trace[1068047252] 'range keys from in-memory index tree'  (duration: 110.294ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:30:45.867044Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.860548ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-10T17:30:45.867096Z","caller":"traceutil/trace.go:171","msg":"trace[1809753364] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:927; }","duration":"102.917852ms","start":"2024-09-10T17:30:45.764169Z","end":"2024-09-10T17:30:45.867087Z","steps":["trace[1809753364] 'range keys from in-memory index tree'  (duration: 102.76827ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:30:52.059608Z","caller":"traceutil/trace.go:171","msg":"trace[1298428410] transaction","detail":"{read_only:false; response_revision:937; number_of_response:1; }","duration":"112.30505ms","start":"2024-09-10T17:30:51.947283Z","end":"2024-09-10T17:30:52.059589Z","steps":["trace[1298428410] 'process raft request'  (duration: 112.162762ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:30:53.611772Z","caller":"traceutil/trace.go:171","msg":"trace[1258985527] linearizableReadLoop","detail":"{readStateIndex:964; appliedIndex:963; }","duration":"178.702934ms","start":"2024-09-10T17:30:53.433055Z","end":"2024-09-10T17:30:53.611758Z","steps":["trace[1258985527] 'read index received'  (duration: 178.578616ms)","trace[1258985527] 'applied index is now lower than readState.Index'  (duration: 123.822µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-10T17:30:53.611866Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.792771ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:30:53.611943Z","caller":"traceutil/trace.go:171","msg":"trace[1456752792] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:938; }","duration":"178.885873ms","start":"2024-09-10T17:30:53.433052Z","end":"2024-09-10T17:30:53.611937Z","steps":["trace[1456752792] 'agreement among raft nodes before linearized reading'  (duration: 178.78055ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:31:14.585857Z","caller":"traceutil/trace.go:171","msg":"trace[1736615180] linearizableReadLoop","detail":"{readStateIndex:1118; appliedIndex:1117; }","duration":"331.383713ms","start":"2024-09-10T17:31:14.254456Z","end":"2024-09-10T17:31:14.585840Z","steps":["trace[1736615180] 'read index received'  (duration: 331.171762ms)","trace[1736615180] 'applied index is now lower than readState.Index'  (duration: 211.53µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-10T17:31:14.585995Z","caller":"traceutil/trace.go:171","msg":"trace[616425486] transaction","detail":"{read_only:false; response_revision:1087; number_of_response:1; }","duration":"377.322054ms","start":"2024-09-10T17:31:14.208667Z","end":"2024-09-10T17:31:14.585989Z","steps":["trace[616425486] 'process raft request'  (duration: 377.062724ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:31:14.586082Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T17:31:14.208652Z","time spent":"377.361583ms","remote":"127.0.0.1:39804","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1074 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-10T17:31:14.586243Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.149848ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:31:14.586321Z","caller":"traceutil/trace.go:171","msg":"trace[1902664727] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1087; }","duration":"295.232349ms","start":"2024-09-10T17:31:14.291079Z","end":"2024-09-10T17:31:14.586312Z","steps":["trace[1902664727] 'agreement among raft nodes before linearized reading'  (duration: 295.125426ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:31:14.586354Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.675068ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:31:14.586388Z","caller":"traceutil/trace.go:171","msg":"trace[681843452] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1087; }","duration":"153.706065ms","start":"2024-09-10T17:31:14.432677Z","end":"2024-09-10T17:31:14.586383Z","steps":["trace[681843452] 'agreement among raft nodes before linearized reading'  (duration: 153.67064ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:31:14.586334Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"331.898535ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:31:14.587217Z","caller":"traceutil/trace.go:171","msg":"trace[59462955] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1087; }","duration":"332.778889ms","start":"2024-09-10T17:31:14.254428Z","end":"2024-09-10T17:31:14.587207Z","steps":["trace[59462955] 'agreement among raft nodes before linearized reading'  (duration: 331.885636ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:31:14.587550Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T17:31:14.254397Z","time spent":"333.142093ms","remote":"127.0.0.1:39820","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-09-10T17:31:25.693826Z","caller":"traceutil/trace.go:171","msg":"trace[916338974] transaction","detail":"{read_only:false; response_revision:1113; number_of_response:1; }","duration":"175.709853ms","start":"2024-09-10T17:31:25.518097Z","end":"2024-09-10T17:31:25.693806Z","steps":["trace[916338974] 'process raft request'  (duration: 175.242522ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:31:28.273694Z","caller":"traceutil/trace.go:171","msg":"trace[326156197] transaction","detail":"{read_only:false; response_revision:1118; number_of_response:1; }","duration":"145.50673ms","start":"2024-09-10T17:31:28.128165Z","end":"2024-09-10T17:31:28.273671Z","steps":["trace[326156197] 'process raft request'  (duration: 145.173512ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:31:33.252659Z","caller":"traceutil/trace.go:171","msg":"trace[803236101] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"168.076485ms","start":"2024-09-10T17:31:33.084566Z","end":"2024-09-10T17:31:33.252643Z","steps":["trace[803236101] 'process raft request'  (duration: 167.526703ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:39:54.247760Z","caller":"traceutil/trace.go:171","msg":"trace[1408959823] transaction","detail":"{read_only:false; response_revision:2000; number_of_response:1; }","duration":"120.176741ms","start":"2024-09-10T17:39:54.127561Z","end":"2024-09-10T17:39:54.247737Z","steps":["trace[1408959823] 'process raft request'  (duration: 120.058138ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:40:00.350559Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1527}
	{"level":"info","ts":"2024-09-10T17:40:00.394567Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1527,"took":"43.479842ms","hash":4077854701,"current-db-size-bytes":6705152,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":3575808,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-10T17:40:00.394619Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4077854701,"revision":1527,"compact-revision":-1}
	
	
	==> gcp-auth [582aef687e6f17f3c2a84515f32a26653e60025be2de8edfede124d4524777cf] <==
	2024/09/10 17:31:35 Ready to write response ...
	2024/09/10 17:39:48 Ready to marshal response ...
	2024/09/10 17:39:48 Ready to write response ...
	2024/09/10 17:39:48 Ready to marshal response ...
	2024/09/10 17:39:48 Ready to write response ...
	2024/09/10 17:39:54 Ready to marshal response ...
	2024/09/10 17:39:54 Ready to write response ...
	2024/09/10 17:39:59 Ready to marshal response ...
	2024/09/10 17:39:59 Ready to write response ...
	2024/09/10 17:39:59 Ready to marshal response ...
	2024/09/10 17:39:59 Ready to write response ...
	2024/09/10 17:40:08 Ready to marshal response ...
	2024/09/10 17:40:08 Ready to write response ...
	2024/09/10 17:40:19 Ready to marshal response ...
	2024/09/10 17:40:19 Ready to write response ...
	2024/09/10 17:40:51 Ready to marshal response ...
	2024/09/10 17:40:51 Ready to write response ...
	2024/09/10 17:40:55 Ready to marshal response ...
	2024/09/10 17:40:55 Ready to write response ...
	2024/09/10 17:40:55 Ready to marshal response ...
	2024/09/10 17:40:55 Ready to write response ...
	2024/09/10 17:40:55 Ready to marshal response ...
	2024/09/10 17:40:55 Ready to write response ...
	2024/09/10 17:43:11 Ready to marshal response ...
	2024/09/10 17:43:11 Ready to write response ...
	
	
	==> kernel <==
	 17:43:22 up 13 min,  0 users,  load average: 0.57, 0.49, 0.42
	Linux addons-306463 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a702e238565e00637134e10aceb4b0a000411fb06af5a78bdce0a089d6343b7b] <==
	W0910 17:39:45.189265       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0910 17:40:02.002508       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0910 17:40:09.586498       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0910 17:40:09.594344       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0910 17:40:09.601249       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0910 17:40:24.601459       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0910 17:40:34.932458       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:40:34.932521       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:40:34.975423       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:40:34.975477       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:40:34.992396       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:40:34.992451       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:40:35.118983       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:40:35.119164       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0910 17:40:36.120579       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0910 17:40:36.126608       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0910 17:40:37.403724       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0910 17:40:38.410312       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0910 17:40:51.772832       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0910 17:40:51.949953       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.46.35"}
	I0910 17:40:55.053150       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.16.137"}
	I0910 17:43:11.755281       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.191.186"}
	E0910 17:43:14.307458       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0910 17:43:16.969424       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0910 17:43:16.975329       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
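
The authentication errors at 17:43:14-17:43:16 ("serviceaccounts \"ingress-nginx\" not found") coincide with the ingress addon being torn down while its pods still hold tokens, matching the admission-job cleanup in the controller-manager and kubelet logs below; the same pattern appears earlier for local-path-provisioner-service-account. They read as teardown noise rather than an independent failure. A quick confirmation that the service account is indeed gone (a sketch):

    kubectl --context addons-306463 get serviceaccounts -n ingress-nginx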
	
	
	==> kube-controller-manager [1b2fd106868bc397fe25e2b6527f1b60b5821fe72928318b4f27a3432e9cea54] <==
	W0910 17:41:44.819968       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:41:44.820090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:42:03.491094       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:42:03.491229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:42:07.461494       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:42:07.461541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:42:35.935480       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:42:35.935671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:42:36.913199       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:42:36.913323       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:42:49.588630       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:42:49.588834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:42:49.681606       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:42:49.681657       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:43:11.601379       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="50.390786ms"
	I0910 17:43:11.617081       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="15.524646ms"
	I0910 17:43:11.617271       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="90.501µs"
	I0910 17:43:11.622234       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="25.953µs"
	I0910 17:43:13.613821       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="19.086046ms"
	I0910 17:43:13.614781       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="78.473µs"
	I0910 17:43:14.208753       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0910 17:43:14.220397       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="4.563µs"
	I0910 17:43:14.228555       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0910 17:43:17.234131       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:43:17.234259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
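
The repeated PartialObjectMetadata watch failures most likely track CRDs removed mid-run (traces.gadget.kinvolk.io at 17:39:45 and the snapshot.storage.k8s.io groups at 17:40:34-36 in the apiserver log): the metadata informer keeps retrying a resource that no longer exists, which is noisy but harmless. A quick check that the snapshot CRDs are gone (a sketch):

    kubectl --context addons-306463 get crd | grep snapshot.storage.k8s.io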
	
	
	==> kube-proxy [3a73d39390d5ae35c72fc26b22cf8e0830151755c0757d4db86635b5c22b5975] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 17:30:10.959254       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 17:30:10.977328       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.144"]
	E0910 17:30:10.977427       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 17:30:11.055345       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 17:30:11.055408       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 17:30:11.055442       1 server_linux.go:169] "Using iptables Proxier"
	I0910 17:30:11.058990       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 17:30:11.059418       1 server.go:483] "Version info" version="v1.31.0"
	I0910 17:30:11.059455       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 17:30:11.061020       1 config.go:197] "Starting service config controller"
	I0910 17:30:11.061045       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 17:30:11.061068       1 config.go:104] "Starting endpoint slice config controller"
	I0910 17:30:11.061072       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 17:30:11.061523       1 config.go:326] "Starting node config controller"
	I0910 17:30:11.061530       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 17:30:11.161679       1 shared_informer.go:320] Caches are synced for node config
	I0910 17:30:11.161709       1 shared_informer.go:320] Caches are synced for service config
	I0910 17:30:11.161736       1 shared_informer.go:320] Caches are synced for endpoint slice config
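
The truncated nftables errors at the top of this section ("Operation not supported" for add table ip/ip6 kube-proxy) are kube-proxy's best-effort cleanup of nftables rules on a kernel that apparently lacks nf_tables support; the proxy then proceeds in iptables mode as logged, so they are benign here. To confirm the iptables rules were actually programmed, a sketch assuming iptables-save exists in the guest:

    minikube -p addons-306463 ssh -- sudo iptables-save | grep -c KUBE-

A non-zero count confirms the KUBE- chains and per-service rules are in place.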
	
	
	==> kube-scheduler [f698d8d7966b01930ddbc6587bb08f04f7e2f8adad63b8a3e9887aed729b9495] <==
	W0910 17:30:01.806509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 17:30:01.806539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:01.806590       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 17:30:01.806622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:01.806670       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 17:30:01.806700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:01.806755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0910 17:30:01.806784       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:01.810146       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0910 17:30:01.811967       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0910 17:30:02.656866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 17:30:02.656998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:02.852652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 17:30:02.852741       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:02.914536       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 17:30:02.914590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:02.973206       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 17:30:02.973257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:03.010457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 17:30:03.010597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:03.040102       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0910 17:30:03.040268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:03.048988       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0910 17:30:03.049072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0910 17:30:03.383329       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 17:43:12 addons-306463 kubelet[1220]: I0910 17:43:12.945143    1220 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9jdw\" (UniqueName: \"kubernetes.io/projected/33998c91-0157-46f1-aa90-c6001166fff3-kube-api-access-b9jdw\") pod \"33998c91-0157-46f1-aa90-c6001166fff3\" (UID: \"33998c91-0157-46f1-aa90-c6001166fff3\") "
	Sep 10 17:43:12 addons-306463 kubelet[1220]: I0910 17:43:12.948724    1220 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33998c91-0157-46f1-aa90-c6001166fff3-kube-api-access-b9jdw" (OuterVolumeSpecName: "kube-api-access-b9jdw") pod "33998c91-0157-46f1-aa90-c6001166fff3" (UID: "33998c91-0157-46f1-aa90-c6001166fff3"). InnerVolumeSpecName "kube-api-access-b9jdw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:43:13 addons-306463 kubelet[1220]: I0910 17:43:13.045412    1220 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-b9jdw\" (UniqueName: \"kubernetes.io/projected/33998c91-0157-46f1-aa90-c6001166fff3-kube-api-access-b9jdw\") on node \"addons-306463\" DevicePath \"\""
	Sep 10 17:43:13 addons-306463 kubelet[1220]: I0910 17:43:13.560708    1220 scope.go:117] "RemoveContainer" containerID="c3927bb112d8eef7839083f69c6d04d29e13d523f980de943445625e77252794"
	Sep 10 17:43:13 addons-306463 kubelet[1220]: I0910 17:43:13.594589    1220 scope.go:117] "RemoveContainer" containerID="c3927bb112d8eef7839083f69c6d04d29e13d523f980de943445625e77252794"
	Sep 10 17:43:13 addons-306463 kubelet[1220]: E0910 17:43:13.600961    1220 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3927bb112d8eef7839083f69c6d04d29e13d523f980de943445625e77252794\": container with ID starting with c3927bb112d8eef7839083f69c6d04d29e13d523f980de943445625e77252794 not found: ID does not exist" containerID="c3927bb112d8eef7839083f69c6d04d29e13d523f980de943445625e77252794"
	Sep 10 17:43:13 addons-306463 kubelet[1220]: I0910 17:43:13.601046    1220 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3927bb112d8eef7839083f69c6d04d29e13d523f980de943445625e77252794"} err="failed to get container status \"c3927bb112d8eef7839083f69c6d04d29e13d523f980de943445625e77252794\": rpc error: code = NotFound desc = could not find container \"c3927bb112d8eef7839083f69c6d04d29e13d523f980de943445625e77252794\": container with ID starting with c3927bb112d8eef7839083f69c6d04d29e13d523f980de943445625e77252794 not found: ID does not exist"
	Sep 10 17:43:13 addons-306463 kubelet[1220]: I0910 17:43:13.621066    1220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-c627d" podStartSLOduration=1.986860279 podStartE2EDuration="2.621048591s" podCreationTimestamp="2024-09-10 17:43:11 +0000 UTC" firstStartedPulling="2024-09-10 17:43:12.172097264 +0000 UTC m=+788.193796411" lastFinishedPulling="2024-09-10 17:43:12.806285573 +0000 UTC m=+788.827984723" observedRunningTime="2024-09-10 17:43:13.597118724 +0000 UTC m=+789.618817891" watchObservedRunningTime="2024-09-10 17:43:13.621048591 +0000 UTC m=+789.642747753"
	Sep 10 17:43:14 addons-306463 kubelet[1220]: I0910 17:43:14.123144    1220 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33998c91-0157-46f1-aa90-c6001166fff3" path="/var/lib/kubelet/pods/33998c91-0157-46f1-aa90-c6001166fff3/volumes"
	Sep 10 17:43:14 addons-306463 kubelet[1220]: E0910 17:43:14.459519    1220 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990194459233902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:43:14 addons-306463 kubelet[1220]: E0910 17:43:14.459574    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990194459233902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:43:16 addons-306463 kubelet[1220]: I0910 17:43:16.123493    1220 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3fb5872-1ea0-4a79-a942-351d4c144608" path="/var/lib/kubelet/pods/a3fb5872-1ea0-4a79-a942-351d4c144608/volumes"
	Sep 10 17:43:16 addons-306463 kubelet[1220]: I0910 17:43:16.124689    1220 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfd83841-db7b-49e3-9721-8b75e0cdd1c7" path="/var/lib/kubelet/pods/bfd83841-db7b-49e3-9721-8b75e0cdd1c7/volumes"
	Sep 10 17:43:17 addons-306463 kubelet[1220]: I0910 17:43:17.481501    1220 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/63d978ce-6789-493d-a46f-de2712ba51dd-webhook-cert\") pod \"63d978ce-6789-493d-a46f-de2712ba51dd\" (UID: \"63d978ce-6789-493d-a46f-de2712ba51dd\") "
	Sep 10 17:43:17 addons-306463 kubelet[1220]: I0910 17:43:17.481557    1220 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df6mf\" (UniqueName: \"kubernetes.io/projected/63d978ce-6789-493d-a46f-de2712ba51dd-kube-api-access-df6mf\") pod \"63d978ce-6789-493d-a46f-de2712ba51dd\" (UID: \"63d978ce-6789-493d-a46f-de2712ba51dd\") "
	Sep 10 17:43:17 addons-306463 kubelet[1220]: I0910 17:43:17.486112    1220 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63d978ce-6789-493d-a46f-de2712ba51dd-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "63d978ce-6789-493d-a46f-de2712ba51dd" (UID: "63d978ce-6789-493d-a46f-de2712ba51dd"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 10 17:43:17 addons-306463 kubelet[1220]: I0910 17:43:17.486579    1220 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63d978ce-6789-493d-a46f-de2712ba51dd-kube-api-access-df6mf" (OuterVolumeSpecName: "kube-api-access-df6mf") pod "63d978ce-6789-493d-a46f-de2712ba51dd" (UID: "63d978ce-6789-493d-a46f-de2712ba51dd"). InnerVolumeSpecName "kube-api-access-df6mf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:43:17 addons-306463 kubelet[1220]: I0910 17:43:17.581725    1220 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-df6mf\" (UniqueName: \"kubernetes.io/projected/63d978ce-6789-493d-a46f-de2712ba51dd-kube-api-access-df6mf\") on node \"addons-306463\" DevicePath \"\""
	Sep 10 17:43:17 addons-306463 kubelet[1220]: I0910 17:43:17.581773    1220 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/63d978ce-6789-493d-a46f-de2712ba51dd-webhook-cert\") on node \"addons-306463\" DevicePath \"\""
	Sep 10 17:43:17 addons-306463 kubelet[1220]: I0910 17:43:17.590603    1220 scope.go:117] "RemoveContainer" containerID="8d4dc66b78cf362fa98ace3d25b3d6d54fb9efed385b4e766904f13e18ce020e"
	Sep 10 17:43:17 addons-306463 kubelet[1220]: I0910 17:43:17.613441    1220 scope.go:117] "RemoveContainer" containerID="8d4dc66b78cf362fa98ace3d25b3d6d54fb9efed385b4e766904f13e18ce020e"
	Sep 10 17:43:17 addons-306463 kubelet[1220]: E0910 17:43:17.613866    1220 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d4dc66b78cf362fa98ace3d25b3d6d54fb9efed385b4e766904f13e18ce020e\": container with ID starting with 8d4dc66b78cf362fa98ace3d25b3d6d54fb9efed385b4e766904f13e18ce020e not found: ID does not exist" containerID="8d4dc66b78cf362fa98ace3d25b3d6d54fb9efed385b4e766904f13e18ce020e"
	Sep 10 17:43:17 addons-306463 kubelet[1220]: I0910 17:43:17.613950    1220 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4dc66b78cf362fa98ace3d25b3d6d54fb9efed385b4e766904f13e18ce020e"} err="failed to get container status \"8d4dc66b78cf362fa98ace3d25b3d6d54fb9efed385b4e766904f13e18ce020e\": rpc error: code = NotFound desc = could not find container \"8d4dc66b78cf362fa98ace3d25b3d6d54fb9efed385b4e766904f13e18ce020e\": container with ID starting with 8d4dc66b78cf362fa98ace3d25b3d6d54fb9efed385b4e766904f13e18ce020e not found: ID does not exist"
	Sep 10 17:43:18 addons-306463 kubelet[1220]: I0910 17:43:18.124376    1220 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63d978ce-6789-493d-a46f-de2712ba51dd" path="/var/lib/kubelet/pods/63d978ce-6789-493d-a46f-de2712ba51dd/volumes"
	Sep 10 17:43:21 addons-306463 kubelet[1220]: E0910 17:43:21.120570    1220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="a237baa2-0c28-439f-8fab-71565e2afef5"
	
	
	==> storage-provisioner [bc2884c8e7918f3f66e43acc12fa41b52b90eaae0693407161bbabf4a79747bf] <==
	I0910 17:30:16.804855       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 17:30:16.824584       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 17:30:16.824662       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 17:30:16.842816       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 17:30:16.866442       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-306463_b8efc0b6-d765-4a34-ad76-23431d9ccb05!
	I0910 17:30:16.866012       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"337c439a-f46b-493b-9e06-ad4421b197f3", APIVersion:"v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-306463_b8efc0b6-d765-4a34-ad76-23431d9ccb05 became leader
	I0910 17:30:16.971090       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-306463_b8efc0b6-d765-4a34-ad76-23431d9ccb05!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-306463 -n addons-306463
helpers_test.go:261: (dbg) Run:  kubectl --context addons-306463 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-306463 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-306463 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-306463/192.168.39.144
	Start Time:       Tue, 10 Sep 2024 17:31:35 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7msjq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7msjq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  11m                 default-scheduler  Successfully assigned default/busybox to addons-306463
	  Normal   Pulling    10m (x4 over 11m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 11m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 11m)   kubelet            Error: ErrImagePull
	  Warning  Failed     10m (x6 over 11m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    96s (x43 over 11m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.80s)
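The failure above is a registry-authentication problem surfaced through the ingress test's helper pod rather than an ingress problem as such: busybox never starts because every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc is rejected with "unable to retrieve auth token: invalid username/password", and the describe output shows fake GCP credentials (PROJECT_ID=this_is_fake, /google-app-creds.json) apparently injected by the gcp-auth addon. A quick way to confirm that reading, sketched here as diagnostic commands that are not part of the test suite and assume the addons-306463 profile is still up:

    # Re-read the pod events and the injected GCP env/volumes shown in the describe output above.
    kubectl --context addons-306463 describe pod busybox -n default

    # Retry the same pull directly on the node through CRI-O; if it fails here too,
    # the problem is registry auth on the node, not the ingress addon.
    out/minikube-linux-amd64 -p addons-306463 ssh "sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"

    # The gcp-auth addon is what injects /google-app-creds.json; disabling it and recreating
    # the pod is a hypothetical follow-up that would rule it in or out.
    out/minikube-linux-amd64 -p addons-306463 addons disable gcp-auth

If the crictl pull succeeds while the kubelet pull keeps failing, the injected pull credentials are the likely culprit; if it fails the same way, the node simply cannot authenticate to gcr.io from this environment.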

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (321.64s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.231571ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-q6wcq" [4dc23d17-89f0-47a5-8880-0cf317f8a901] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003645874s
addons_test.go:417: (dbg) Run:  kubectl --context addons-306463 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-306463 top pods -n kube-system: exit status 1 (66.376576ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c5qxp, age: 9m34.752698691s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-306463 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-306463 top pods -n kube-system: exit status 1 (63.870492ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c5qxp, age: 9m36.98098168s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-306463 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-306463 top pods -n kube-system: exit status 1 (69.620557ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c5qxp, age: 9m41.911594388s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-306463 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-306463 top pods -n kube-system: exit status 1 (66.025988ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c5qxp, age: 9m48.245299571s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-306463 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-306463 top pods -n kube-system: exit status 1 (62.481361ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c5qxp, age: 9m58.084219608s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-306463 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-306463 top pods -n kube-system: exit status 1 (65.118854ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c5qxp, age: 10m9.529562157s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-306463 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-306463 top pods -n kube-system: exit status 1 (64.932929ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c5qxp, age: 10m31.227569925s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-306463 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-306463 top pods -n kube-system: exit status 1 (61.336825ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c5qxp, age: 11m4.062577438s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-306463 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-306463 top pods -n kube-system: exit status 1 (64.321206ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c5qxp, age: 12m2.159140195s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-306463 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-306463 top pods -n kube-system: exit status 1 (68.558273ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c5qxp, age: 12m48.665550977s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-306463 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-306463 top pods -n kube-system: exit status 1 (59.126712ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c5qxp, age: 13m20.91881656s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-306463 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-306463 top pods -n kube-system: exit status 1 (60.034923ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c5qxp, age: 14m47.606998096s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-306463 addons disable metrics-server --alsologtostderr -v=1
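Throughout the roughly five-minute retry loop above, kubectl top keeps answering "Metrics not available" for pods that are ten-plus minutes old, which usually points at the aggregated metrics API not serving (or metrics-server failing to scrape the kubelet) rather than normal startup lag, since the metrics-server pod itself reported healthy within six seconds. A few read-only checks that would narrow this down, sketched under the assumption that the addon registers the standard v1beta1.metrics.k8s.io APIService and a kube-system deploy/metrics-server (the pod name metrics-server-84c5f94fbc-q6wcq suggests it does):

    # Is the aggregated metrics API registered and reporting Available=True?
    kubectl --context addons-306463 get apiservice v1beta1.metrics.k8s.io -o wide

    # Does the metrics API answer at all, independent of kubectl top?
    kubectl --context addons-306463 get --raw /apis/metrics.k8s.io/v1beta1/pods | head -c 300

    # What does metrics-server log about scraping the kubelet (TLS/SAN errors are the usual suspect)?
    kubectl --context addons-306463 -n kube-system logs deploy/metrics-server --tail=50

None of these are run by the test; they are the obvious next steps if this failure needs manual triage.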
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-306463 -n addons-306463
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-306463 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-306463 logs -n 25: (1.333533566s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-545922                                                                     | download-only-545922 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p download-only-355146                                                                     | download-only-355146 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-896642 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | binary-mirror-896642                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42249                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-896642                                                                     | binary-mirror-896642 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| addons  | disable dashboard -p                                                                        | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | addons-306463                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | addons-306463                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-306463 --wait=true                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:39 UTC | 10 Sep 24 17:39 UTC |
	|         | addons-306463                                                                               |                      |         |         |                     |                     |
	| addons  | addons-306463 addons disable                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:39 UTC | 10 Sep 24 17:39 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-306463 ssh cat                                                                       | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | /opt/local-path-provisioner/pvc-2be347cf-fef5-49ec-8123-8e3cf02ab859_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-306463 addons disable                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-306463 addons                                                                        | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-306463 addons                                                                        | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-306463 addons disable                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-306463 ip                                                                            | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	| addons  | addons-306463 addons disable                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | -p addons-306463                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | -p addons-306463                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:40 UTC | 10 Sep 24 17:40 UTC |
	|         | addons-306463                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-306463 ssh curl -s                                                                   | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:41 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-306463 addons disable                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:41 UTC | 10 Sep 24 17:41 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-306463 ip                                                                            | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | 10 Sep 24 17:43 UTC |
	| addons  | addons-306463 addons disable                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | 10 Sep 24 17:43 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-306463 addons disable                                                                | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | 10 Sep 24 17:43 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-306463 addons                                                                        | addons-306463        | jenkins | v1.34.0 | 10 Sep 24 17:44 UTC | 10 Sep 24 17:44 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:29:22
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:29:22.682209   13777 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:29:22.682460   13777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:22.682468   13777 out.go:358] Setting ErrFile to fd 2...
	I0910 17:29:22.682472   13777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:22.682675   13777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:29:22.683208   13777 out.go:352] Setting JSON to false
	I0910 17:29:22.683958   13777 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":715,"bootTime":1725988648,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:29:22.684008   13777 start.go:139] virtualization: kvm guest
	I0910 17:29:22.685971   13777 out.go:177] * [addons-306463] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 17:29:22.687151   13777 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:29:22.687158   13777 notify.go:220] Checking for updates...
	I0910 17:29:22.689304   13777 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:29:22.690364   13777 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:29:22.691502   13777 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:29:22.692665   13777 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 17:29:22.693954   13777 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:29:22.695291   13777 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:29:22.725551   13777 out.go:177] * Using the kvm2 driver based on user configuration
	I0910 17:29:22.726685   13777 start.go:297] selected driver: kvm2
	I0910 17:29:22.726698   13777 start.go:901] validating driver "kvm2" against <nil>
	I0910 17:29:22.726711   13777 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:29:22.727613   13777 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:29:22.727695   13777 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 17:29:22.741833   13777 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 17:29:22.741873   13777 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 17:29:22.742090   13777 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:29:22.742162   13777 cni.go:84] Creating CNI manager for ""
	I0910 17:29:22.742176   13777 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 17:29:22.742187   13777 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 17:29:22.742259   13777 start.go:340] cluster config:
	{Name:addons-306463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-306463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:29:22.742373   13777 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:29:22.744027   13777 out.go:177] * Starting "addons-306463" primary control-plane node in "addons-306463" cluster
	I0910 17:29:22.745131   13777 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:29:22.745164   13777 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0910 17:29:22.745174   13777 cache.go:56] Caching tarball of preloaded images
	I0910 17:29:22.745247   13777 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 17:29:22.745259   13777 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 17:29:22.745636   13777 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/config.json ...
	I0910 17:29:22.745666   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/config.json: {Name:mka38f023b13d99d139d0b4b4731421fa1c9c222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:22.745821   13777 start.go:360] acquireMachinesLock for addons-306463: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 17:29:22.745879   13777 start.go:364] duration metric: took 40.358µs to acquireMachinesLock for "addons-306463"
	I0910 17:29:22.745902   13777 start.go:93] Provisioning new machine with config: &{Name:addons-306463 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-306463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:29:22.745979   13777 start.go:125] createHost starting for "" (driver="kvm2")
	I0910 17:29:22.747590   13777 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0910 17:29:22.747699   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:29:22.747737   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:29:22.761242   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36959
	I0910 17:29:22.761623   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:29:22.762084   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:29:22.762105   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:29:22.762416   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:29:22.762596   13777 main.go:141] libmachine: (addons-306463) Calling .GetMachineName
	I0910 17:29:22.762723   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:22.762855   13777 start.go:159] libmachine.API.Create for "addons-306463" (driver="kvm2")
	I0910 17:29:22.762901   13777 client.go:168] LocalClient.Create starting
	I0910 17:29:22.762931   13777 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem
	I0910 17:29:22.824214   13777 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem
	I0910 17:29:23.021609   13777 main.go:141] libmachine: Running pre-create checks...
	I0910 17:29:23.021632   13777 main.go:141] libmachine: (addons-306463) Calling .PreCreateCheck
	I0910 17:29:23.022141   13777 main.go:141] libmachine: (addons-306463) Calling .GetConfigRaw
	I0910 17:29:23.022504   13777 main.go:141] libmachine: Creating machine...
	I0910 17:29:23.022515   13777 main.go:141] libmachine: (addons-306463) Calling .Create
	I0910 17:29:23.022671   13777 main.go:141] libmachine: (addons-306463) Creating KVM machine...
	I0910 17:29:23.023879   13777 main.go:141] libmachine: (addons-306463) DBG | found existing default KVM network
	I0910 17:29:23.024609   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:23.024461   13799 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0910 17:29:23.024628   13777 main.go:141] libmachine: (addons-306463) DBG | created network xml: 
	I0910 17:29:23.024641   13777 main.go:141] libmachine: (addons-306463) DBG | <network>
	I0910 17:29:23.024649   13777 main.go:141] libmachine: (addons-306463) DBG |   <name>mk-addons-306463</name>
	I0910 17:29:23.024662   13777 main.go:141] libmachine: (addons-306463) DBG |   <dns enable='no'/>
	I0910 17:29:23.024669   13777 main.go:141] libmachine: (addons-306463) DBG |   
	I0910 17:29:23.024682   13777 main.go:141] libmachine: (addons-306463) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0910 17:29:23.024693   13777 main.go:141] libmachine: (addons-306463) DBG |     <dhcp>
	I0910 17:29:23.024763   13777 main.go:141] libmachine: (addons-306463) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0910 17:29:23.024789   13777 main.go:141] libmachine: (addons-306463) DBG |     </dhcp>
	I0910 17:29:23.024803   13777 main.go:141] libmachine: (addons-306463) DBG |   </ip>
	I0910 17:29:23.024817   13777 main.go:141] libmachine: (addons-306463) DBG |   
	I0910 17:29:23.024828   13777 main.go:141] libmachine: (addons-306463) DBG | </network>
	I0910 17:29:23.024838   13777 main.go:141] libmachine: (addons-306463) DBG | 
	I0910 17:29:23.029807   13777 main.go:141] libmachine: (addons-306463) DBG | trying to create private KVM network mk-addons-306463 192.168.39.0/24...
	I0910 17:29:23.091118   13777 main.go:141] libmachine: (addons-306463) DBG | private KVM network mk-addons-306463 192.168.39.0/24 created
	I0910 17:29:23.091150   13777 main.go:141] libmachine: (addons-306463) Setting up store path in /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463 ...
	I0910 17:29:23.091164   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:23.091073   13799 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:29:23.091178   13777 main.go:141] libmachine: (addons-306463) Building disk image from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 17:29:23.091208   13777 main.go:141] libmachine: (addons-306463) Downloading /home/jenkins/minikube-integration/19598-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 17:29:23.339080   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:23.338953   13799 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa...
	I0910 17:29:23.548665   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:23.548540   13799 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/addons-306463.rawdisk...
	I0910 17:29:23.548703   13777 main.go:141] libmachine: (addons-306463) DBG | Writing magic tar header
	I0910 17:29:23.548717   13777 main.go:141] libmachine: (addons-306463) DBG | Writing SSH key tar header
	I0910 17:29:23.548730   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:23.548675   13799 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463 ...
	I0910 17:29:23.548788   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463
	I0910 17:29:23.548813   13777 main.go:141] libmachine: (addons-306463) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463 (perms=drwx------)
	I0910 17:29:23.548826   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines
	I0910 17:29:23.548840   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:29:23.548846   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973
	I0910 17:29:23.548863   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0910 17:29:23.548876   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home/jenkins
	I0910 17:29:23.548888   13777 main.go:141] libmachine: (addons-306463) DBG | Checking permissions on dir: /home
	I0910 17:29:23.548904   13777 main.go:141] libmachine: (addons-306463) DBG | Skipping /home - not owner
	I0910 17:29:23.548918   13777 main.go:141] libmachine: (addons-306463) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines (perms=drwxr-xr-x)
	I0910 17:29:23.548931   13777 main.go:141] libmachine: (addons-306463) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube (perms=drwxr-xr-x)
	I0910 17:29:23.548942   13777 main.go:141] libmachine: (addons-306463) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973 (perms=drwxrwxr-x)
	I0910 17:29:23.548949   13777 main.go:141] libmachine: (addons-306463) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0910 17:29:23.548957   13777 main.go:141] libmachine: (addons-306463) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0910 17:29:23.548963   13777 main.go:141] libmachine: (addons-306463) Creating domain...
	I0910 17:29:23.549957   13777 main.go:141] libmachine: (addons-306463) define libvirt domain using xml: 
	I0910 17:29:23.549976   13777 main.go:141] libmachine: (addons-306463) <domain type='kvm'>
	I0910 17:29:23.549984   13777 main.go:141] libmachine: (addons-306463)   <name>addons-306463</name>
	I0910 17:29:23.549995   13777 main.go:141] libmachine: (addons-306463)   <memory unit='MiB'>4000</memory>
	I0910 17:29:23.550004   13777 main.go:141] libmachine: (addons-306463)   <vcpu>2</vcpu>
	I0910 17:29:23.550011   13777 main.go:141] libmachine: (addons-306463)   <features>
	I0910 17:29:23.550016   13777 main.go:141] libmachine: (addons-306463)     <acpi/>
	I0910 17:29:23.550023   13777 main.go:141] libmachine: (addons-306463)     <apic/>
	I0910 17:29:23.550027   13777 main.go:141] libmachine: (addons-306463)     <pae/>
	I0910 17:29:23.550031   13777 main.go:141] libmachine: (addons-306463)     
	I0910 17:29:23.550036   13777 main.go:141] libmachine: (addons-306463)   </features>
	I0910 17:29:23.550043   13777 main.go:141] libmachine: (addons-306463)   <cpu mode='host-passthrough'>
	I0910 17:29:23.550050   13777 main.go:141] libmachine: (addons-306463)   
	I0910 17:29:23.550064   13777 main.go:141] libmachine: (addons-306463)   </cpu>
	I0910 17:29:23.550074   13777 main.go:141] libmachine: (addons-306463)   <os>
	I0910 17:29:23.550087   13777 main.go:141] libmachine: (addons-306463)     <type>hvm</type>
	I0910 17:29:23.550095   13777 main.go:141] libmachine: (addons-306463)     <boot dev='cdrom'/>
	I0910 17:29:23.550103   13777 main.go:141] libmachine: (addons-306463)     <boot dev='hd'/>
	I0910 17:29:23.550108   13777 main.go:141] libmachine: (addons-306463)     <bootmenu enable='no'/>
	I0910 17:29:23.550121   13777 main.go:141] libmachine: (addons-306463)   </os>
	I0910 17:29:23.550139   13777 main.go:141] libmachine: (addons-306463)   <devices>
	I0910 17:29:23.550156   13777 main.go:141] libmachine: (addons-306463)     <disk type='file' device='cdrom'>
	I0910 17:29:23.550170   13777 main.go:141] libmachine: (addons-306463)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/boot2docker.iso'/>
	I0910 17:29:23.550179   13777 main.go:141] libmachine: (addons-306463)       <target dev='hdc' bus='scsi'/>
	I0910 17:29:23.550185   13777 main.go:141] libmachine: (addons-306463)       <readonly/>
	I0910 17:29:23.550191   13777 main.go:141] libmachine: (addons-306463)     </disk>
	I0910 17:29:23.550198   13777 main.go:141] libmachine: (addons-306463)     <disk type='file' device='disk'>
	I0910 17:29:23.550206   13777 main.go:141] libmachine: (addons-306463)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0910 17:29:23.550221   13777 main.go:141] libmachine: (addons-306463)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/addons-306463.rawdisk'/>
	I0910 17:29:23.550239   13777 main.go:141] libmachine: (addons-306463)       <target dev='hda' bus='virtio'/>
	I0910 17:29:23.550246   13777 main.go:141] libmachine: (addons-306463)     </disk>
	I0910 17:29:23.550252   13777 main.go:141] libmachine: (addons-306463)     <interface type='network'>
	I0910 17:29:23.550256   13777 main.go:141] libmachine: (addons-306463)       <source network='mk-addons-306463'/>
	I0910 17:29:23.550262   13777 main.go:141] libmachine: (addons-306463)       <model type='virtio'/>
	I0910 17:29:23.550268   13777 main.go:141] libmachine: (addons-306463)     </interface>
	I0910 17:29:23.550274   13777 main.go:141] libmachine: (addons-306463)     <interface type='network'>
	I0910 17:29:23.550285   13777 main.go:141] libmachine: (addons-306463)       <source network='default'/>
	I0910 17:29:23.550301   13777 main.go:141] libmachine: (addons-306463)       <model type='virtio'/>
	I0910 17:29:23.550316   13777 main.go:141] libmachine: (addons-306463)     </interface>
	I0910 17:29:23.550326   13777 main.go:141] libmachine: (addons-306463)     <serial type='pty'>
	I0910 17:29:23.550334   13777 main.go:141] libmachine: (addons-306463)       <target port='0'/>
	I0910 17:29:23.550339   13777 main.go:141] libmachine: (addons-306463)     </serial>
	I0910 17:29:23.550346   13777 main.go:141] libmachine: (addons-306463)     <console type='pty'>
	I0910 17:29:23.550352   13777 main.go:141] libmachine: (addons-306463)       <target type='serial' port='0'/>
	I0910 17:29:23.550358   13777 main.go:141] libmachine: (addons-306463)     </console>
	I0910 17:29:23.550364   13777 main.go:141] libmachine: (addons-306463)     <rng model='virtio'>
	I0910 17:29:23.550371   13777 main.go:141] libmachine: (addons-306463)       <backend model='random'>/dev/random</backend>
	I0910 17:29:23.550377   13777 main.go:141] libmachine: (addons-306463)     </rng>
	I0910 17:29:23.550386   13777 main.go:141] libmachine: (addons-306463)     
	I0910 17:29:23.550422   13777 main.go:141] libmachine: (addons-306463)     
	I0910 17:29:23.550446   13777 main.go:141] libmachine: (addons-306463)   </devices>
	I0910 17:29:23.550457   13777 main.go:141] libmachine: (addons-306463) </domain>
	I0910 17:29:23.550464   13777 main.go:141] libmachine: (addons-306463) 
	I0910 17:29:23.555556   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:8a:bf:af in network default
	I0910 17:29:23.556041   13777 main.go:141] libmachine: (addons-306463) Ensuring networks are active...
	I0910 17:29:23.556059   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:23.556675   13777 main.go:141] libmachine: (addons-306463) Ensuring network default is active
	I0910 17:29:23.556973   13777 main.go:141] libmachine: (addons-306463) Ensuring network mk-addons-306463 is active
	I0910 17:29:23.557522   13777 main.go:141] libmachine: (addons-306463) Getting domain xml...
	I0910 17:29:23.558190   13777 main.go:141] libmachine: (addons-306463) Creating domain...
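(Annotation: the "Getting domain xml... / Creating domain..." lines above correspond to defining the <domain> document printed earlier and booting it through libvirt. The sketch below is a minimal, standalone illustration of that step using the libvirt Go bindings at libvirt.org/go/libvirt; it is not minikube's kvm2 driver code, and the connection URI and abbreviated XML are assumptions for illustration.)

	// Sketch: ensure the "default" network is active, define a domain from
	// XML, and start it -- roughly the sequence logged above.
	package main

	import (
		"log"

		"libvirt.org/go/libvirt"
	)

	func createDomain(domainXML string) error {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			return err
		}
		defer conn.Close()

		// "Ensuring network default is active"
		net, err := conn.LookupNetworkByName("default")
		if err != nil {
			return err
		}
		defer net.Free()
		if active, err := net.IsActive(); err == nil && !active {
			if err := net.Create(); err != nil {
				return err
			}
		}

		// Define the persistent domain from XML, then boot it.
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return err
		}
		defer dom.Free()
		return dom.Create()
	}

	func main() {
		// domainXML would be the full <domain type='kvm'> document from the log.
		if err := createDomain("<domain type='kvm'>...</domain>"); err != nil {
			log.Fatal(err)
		}
	}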
	I0910 17:29:24.925718   13777 main.go:141] libmachine: (addons-306463) Waiting to get IP...
	I0910 17:29:24.926478   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:24.926843   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:24.926877   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:24.926829   13799 retry.go:31] will retry after 244.328706ms: waiting for machine to come up
	I0910 17:29:25.173225   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:25.173645   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:25.173677   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:25.173618   13799 retry.go:31] will retry after 349.863232ms: waiting for machine to come up
	I0910 17:29:25.525116   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:25.525527   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:25.525551   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:25.525492   13799 retry.go:31] will retry after 354.701071ms: waiting for machine to come up
	I0910 17:29:25.881916   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:25.882328   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:25.882350   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:25.882291   13799 retry.go:31] will retry after 411.881959ms: waiting for machine to come up
	I0910 17:29:26.296034   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:26.296469   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:26.296495   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:26.296414   13799 retry.go:31] will retry after 565.67781ms: waiting for machine to come up
	I0910 17:29:26.864221   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:26.864646   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:26.864669   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:26.864638   13799 retry.go:31] will retry after 573.622911ms: waiting for machine to come up
	I0910 17:29:27.439318   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:27.439758   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:27.439778   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:27.439737   13799 retry.go:31] will retry after 813.476344ms: waiting for machine to come up
	I0910 17:29:28.254405   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:28.254862   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:28.254883   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:28.254830   13799 retry.go:31] will retry after 1.15953408s: waiting for machine to come up
	I0910 17:29:29.416144   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:29.416582   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:29.416605   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:29.416548   13799 retry.go:31] will retry after 1.708147643s: waiting for machine to come up
	I0910 17:29:31.127436   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:31.127806   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:31.127832   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:31.127765   13799 retry.go:31] will retry after 2.290831953s: waiting for machine to come up
	I0910 17:29:33.419747   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:33.420078   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:33.420121   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:33.420025   13799 retry.go:31] will retry after 2.583428608s: waiting for machine to come up
	I0910 17:29:36.006176   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:36.006651   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:36.006676   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:36.006622   13799 retry.go:31] will retry after 2.503171234s: waiting for machine to come up
	I0910 17:29:38.511747   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:38.512087   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:38.512126   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:38.512062   13799 retry.go:31] will retry after 3.047981844s: waiting for machine to come up
	I0910 17:29:41.561167   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:41.561635   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find current IP address of domain addons-306463 in network mk-addons-306463
	I0910 17:29:41.561661   13777 main.go:141] libmachine: (addons-306463) DBG | I0910 17:29:41.561592   13799 retry.go:31] will retry after 5.416767796s: waiting for machine to come up
	I0910 17:29:46.982824   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:46.983201   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has current primary IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:46.983221   13777 main.go:141] libmachine: (addons-306463) Found IP for machine: 192.168.39.144
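(Annotation: the retry.go lines above poll the DHCP leases of network mk-addons-306463 with roughly growing, jittered delays until the new domain reports an IP. The following is a generic sketch of that retry-with-backoff pattern, not minikube's retry package; getIP, the delay growth, the jitter, and the one-minute deadline are illustrative assumptions.)

	// Sketch of the "will retry after ...: waiting for machine to come up" loop.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func waitForIP(getIP func() (string, error), deadline time.Duration) (string, error) {
		delay := 200 * time.Millisecond
		end := time.Now().Add(deadline)
		for time.Now().Before(end) {
			ip, err := getIP()
			if err == nil && ip != "" {
				return ip, nil
			}
			// Jitter the delay so retries don't synchronize, then back off.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 5*time.Second {
				delay *= 2
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		attempts := 0
		// Fake lookup: fails a few times, then returns the address seen in the log.
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("unable to find current IP address")
			}
			return "192.168.39.144", nil
		}, time.Minute)
		fmt.Println(ip, err)
	}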
	I0910 17:29:46.983236   13777 main.go:141] libmachine: (addons-306463) Reserving static IP address...
	I0910 17:29:46.983568   13777 main.go:141] libmachine: (addons-306463) DBG | unable to find host DHCP lease matching {name: "addons-306463", mac: "52:54:00:74:46:16", ip: "192.168.39.144"} in network mk-addons-306463
	I0910 17:29:47.052549   13777 main.go:141] libmachine: (addons-306463) DBG | Getting to WaitForSSH function...
	I0910 17:29:47.052583   13777 main.go:141] libmachine: (addons-306463) Reserved static IP address: 192.168.39.144
	I0910 17:29:47.052599   13777 main.go:141] libmachine: (addons-306463) Waiting for SSH to be available...
	I0910 17:29:47.055206   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.055721   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:minikube Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.055749   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.055768   13777 main.go:141] libmachine: (addons-306463) DBG | Using SSH client type: external
	I0910 17:29:47.055784   13777 main.go:141] libmachine: (addons-306463) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa (-rw-------)
	I0910 17:29:47.055817   13777 main.go:141] libmachine: (addons-306463) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 17:29:47.055833   13777 main.go:141] libmachine: (addons-306463) DBG | About to run SSH command:
	I0910 17:29:47.055847   13777 main.go:141] libmachine: (addons-306463) DBG | exit 0
	I0910 17:29:47.189212   13777 main.go:141] libmachine: (addons-306463) DBG | SSH cmd err, output: <nil>: 
	I0910 17:29:47.189498   13777 main.go:141] libmachine: (addons-306463) KVM machine creation complete!
	I0910 17:29:47.189774   13777 main.go:141] libmachine: (addons-306463) Calling .GetConfigRaw
	I0910 17:29:47.190322   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:47.190546   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:47.190703   13777 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0910 17:29:47.190718   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:29:47.191953   13777 main.go:141] libmachine: Detecting operating system of created instance...
	I0910 17:29:47.191983   13777 main.go:141] libmachine: Waiting for SSH to be available...
	I0910 17:29:47.191990   13777 main.go:141] libmachine: Getting to WaitForSSH function...
	I0910 17:29:47.192000   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.194176   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.194550   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.194580   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.194727   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:47.194890   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.195040   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.195167   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:47.195310   13777 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:47.195466   13777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0910 17:29:47.195475   13777 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0910 17:29:47.296268   13777 main.go:141] libmachine: SSH cmd err, output: <nil>: 
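(Annotation: both SSH probes above simply run "exit 0" against the new machine; success means sshd is up and key authentication works. Below is a minimal sketch of that probe using golang.org/x/crypto/ssh; the user, key path, and address mirror the log, but this is an illustration rather than the docker-machine/libmachine code minikube actually uses.)

	// Sketch of the "About to run SSH command: exit 0" reachability probe.
	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		}
		client, err := ssh.Dial("tcp", "192.168.39.144:22", cfg)
		if err != nil {
			log.Fatal(err) // machine not reachable yet; the caller retries
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		// The probe is just "exit 0": a nil error means SSH is available.
		if err := sess.Run("exit 0"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("SSH is available")
	}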
	I0910 17:29:47.296287   13777 main.go:141] libmachine: Detecting the provisioner...
	I0910 17:29:47.296294   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.298863   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.299207   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.299231   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.299390   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:47.299581   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.299710   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.299846   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:47.300038   13777 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:47.300248   13777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0910 17:29:47.300264   13777 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0910 17:29:47.401977   13777 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0910 17:29:47.402066   13777 main.go:141] libmachine: found compatible host: buildroot
	I0910 17:29:47.402080   13777 main.go:141] libmachine: Provisioning with buildroot...
	I0910 17:29:47.402093   13777 main.go:141] libmachine: (addons-306463) Calling .GetMachineName
	I0910 17:29:47.402339   13777 buildroot.go:166] provisioning hostname "addons-306463"
	I0910 17:29:47.402369   13777 main.go:141] libmachine: (addons-306463) Calling .GetMachineName
	I0910 17:29:47.402589   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.404883   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.405227   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.405262   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.405351   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:47.405496   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.405637   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.405765   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:47.406035   13777 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:47.406187   13777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0910 17:29:47.406198   13777 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-306463 && echo "addons-306463" | sudo tee /etc/hostname
	I0910 17:29:47.519126   13777 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-306463
	
	I0910 17:29:47.519148   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.521835   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.522126   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.522165   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.522331   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:47.522503   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.522688   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.522820   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:47.522981   13777 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:47.523132   13777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0910 17:29:47.523148   13777 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-306463' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-306463/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-306463' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 17:29:47.634728   13777 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:29:47.634773   13777 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 17:29:47.634798   13777 buildroot.go:174] setting up certificates
	I0910 17:29:47.634811   13777 provision.go:84] configureAuth start
	I0910 17:29:47.634820   13777 main.go:141] libmachine: (addons-306463) Calling .GetMachineName
	I0910 17:29:47.635082   13777 main.go:141] libmachine: (addons-306463) Calling .GetIP
	I0910 17:29:47.637636   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.638056   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.638081   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.638266   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.640398   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.640703   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.640732   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.640867   13777 provision.go:143] copyHostCerts
	I0910 17:29:47.640932   13777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 17:29:47.641095   13777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 17:29:47.641166   13777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 17:29:47.641219   13777 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.addons-306463 san=[127.0.0.1 192.168.39.144 addons-306463 localhost minikube]
	I0910 17:29:47.725425   13777 provision.go:177] copyRemoteCerts
	I0910 17:29:47.725479   13777 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 17:29:47.725499   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.728270   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.728605   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.728635   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.728841   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:47.729028   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.729224   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:47.729412   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:29:47.812673   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 17:29:47.838502   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0910 17:29:47.861372   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 17:29:47.884280   13777 provision.go:87] duration metric: took 249.455962ms to configureAuth
	I0910 17:29:47.884302   13777 buildroot.go:189] setting minikube options for container-runtime
	I0910 17:29:47.884440   13777 config.go:182] Loaded profile config "addons-306463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:29:47.884509   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:47.887000   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.887356   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:47.887385   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:47.887546   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:47.887712   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.887871   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:47.888039   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:47.888187   13777 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:47.888352   13777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0910 17:29:47.888365   13777 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 17:29:48.228474   13777 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 17:29:48.228497   13777 main.go:141] libmachine: Checking connection to Docker...
	I0910 17:29:48.228507   13777 main.go:141] libmachine: (addons-306463) Calling .GetURL
	I0910 17:29:48.229870   13777 main.go:141] libmachine: (addons-306463) DBG | Using libvirt version 6000000
	I0910 17:29:48.232480   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.232820   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.232841   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.233000   13777 main.go:141] libmachine: Docker is up and running!
	I0910 17:29:48.233010   13777 main.go:141] libmachine: Reticulating splines...
	I0910 17:29:48.233016   13777 client.go:171] duration metric: took 25.470105424s to LocalClient.Create
	I0910 17:29:48.233036   13777 start.go:167] duration metric: took 25.470181661s to libmachine.API.Create "addons-306463"
	I0910 17:29:48.233049   13777 start.go:293] postStartSetup for "addons-306463" (driver="kvm2")
	I0910 17:29:48.233063   13777 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 17:29:48.233098   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:48.233339   13777 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 17:29:48.233365   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:48.235691   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.236027   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.236056   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.236234   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:48.236415   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:48.236578   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:48.236717   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:29:48.314956   13777 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 17:29:48.319200   13777 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 17:29:48.319217   13777 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 17:29:48.319286   13777 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 17:29:48.319313   13777 start.go:296] duration metric: took 86.256331ms for postStartSetup
	I0910 17:29:48.319357   13777 main.go:141] libmachine: (addons-306463) Calling .GetConfigRaw
	I0910 17:29:48.319875   13777 main.go:141] libmachine: (addons-306463) Calling .GetIP
	I0910 17:29:48.322245   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.322628   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.322656   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.322871   13777 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/config.json ...
	I0910 17:29:48.323037   13777 start.go:128] duration metric: took 25.577048673s to createHost
	I0910 17:29:48.323063   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:48.325320   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.325645   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.325671   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.325773   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:48.325947   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:48.326098   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:48.326209   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:48.326331   13777 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:48.326533   13777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0910 17:29:48.326545   13777 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 17:29:48.425744   13777 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725989388.402057522
	
	I0910 17:29:48.425768   13777 fix.go:216] guest clock: 1725989388.402057522
	I0910 17:29:48.425778   13777 fix.go:229] Guest: 2024-09-10 17:29:48.402057522 +0000 UTC Remote: 2024-09-10 17:29:48.323049297 +0000 UTC m=+25.672610756 (delta=79.008225ms)
	I0910 17:29:48.425835   13777 fix.go:200] guest clock delta is within tolerance: 79.008225ms
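(Annotation: the fix.go lines above read the guest clock via `date +%s.%N`, compare it to the host clock, and skip any adjustment because the ~79ms delta is within tolerance. The sketch below reproduces that comparison with the exact timestamps from the log; the one-second tolerance is an assumption for illustration, not minikube's configured value.)

	// Sketch of the guest-clock delta check.
	package main

	import (
		"fmt"
		"math"
		"time"
	)

	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		return delta, math.Abs(float64(delta)) <= float64(tolerance)
	}

	func main() {
		// Values taken from the log lines above.
		guest := time.Unix(1725989388, 402057522)
		host := time.Unix(1725989388, 323049297)
		delta, ok := clockDeltaWithinTolerance(guest, host, time.Second)
		fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
	}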
	I0910 17:29:48.425843   13777 start.go:83] releasing machines lock for "addons-306463", held for 25.679951591s
	I0910 17:29:48.425876   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:48.426150   13777 main.go:141] libmachine: (addons-306463) Calling .GetIP
	I0910 17:29:48.428633   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.428887   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.428917   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.429038   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:48.429469   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:48.429618   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:29:48.429702   13777 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 17:29:48.429752   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:48.429808   13777 ssh_runner.go:195] Run: cat /version.json
	I0910 17:29:48.429830   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:29:48.432215   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.432477   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.432509   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.432533   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.432629   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:48.432809   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:48.432852   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:48.432885   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:48.432948   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:48.433024   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:29:48.433123   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:29:48.433223   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:29:48.433357   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:29:48.433529   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:29:48.519560   13777 ssh_runner.go:195] Run: systemctl --version
	I0910 17:29:48.543890   13777 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 17:29:48.713886   13777 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 17:29:48.719987   13777 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 17:29:48.720039   13777 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 17:29:48.736004   13777 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 17:29:48.736022   13777 start.go:495] detecting cgroup driver to use...
	I0910 17:29:48.736067   13777 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 17:29:48.752773   13777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 17:29:48.766717   13777 docker.go:217] disabling cri-docker service (if available) ...
	I0910 17:29:48.766772   13777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 17:29:48.780643   13777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 17:29:48.794503   13777 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 17:29:48.918085   13777 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 17:29:49.086620   13777 docker.go:233] disabling docker service ...
	I0910 17:29:49.086682   13777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 17:29:49.100274   13777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 17:29:49.112877   13777 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 17:29:49.235428   13777 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 17:29:49.349493   13777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 17:29:49.363676   13777 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 17:29:49.381290   13777 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 17:29:49.381345   13777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.391264   13777 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 17:29:49.391322   13777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.401028   13777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.410592   13777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.420351   13777 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 17:29:49.430171   13777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.439789   13777 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.455759   13777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:29:49.465551   13777 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 17:29:49.474306   13777 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 17:29:49.474354   13777 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 17:29:49.487232   13777 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 17:29:49.496150   13777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:29:49.606336   13777 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 17:29:49.695242   13777 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 17:29:49.695340   13777 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 17:29:49.699902   13777 start.go:563] Will wait 60s for crictl version
	I0910 17:29:49.699961   13777 ssh_runner.go:195] Run: which crictl
	I0910 17:29:49.703479   13777 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 17:29:49.744817   13777 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
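(Annotation: after restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear and then queries the runtime with crictl. The sketch below shows one way to implement such a wait-for-socket-then-query step; the 500ms poll interval is an assumption, and this is not minikube's actual start.go code.)

	// Sketch of "Will wait 60s for socket path /var/run/crio/crio.sock".
	package main

	import (
		"errors"
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return errors.New("timed out waiting for " + path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		// Equivalent of the `sudo /usr/bin/crictl version` call in the log.
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
		fmt.Println(string(out), err)
	}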
	I0910 17:29:49.744937   13777 ssh_runner.go:195] Run: crio --version
	I0910 17:29:49.773082   13777 ssh_runner.go:195] Run: crio --version
	I0910 17:29:49.804181   13777 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 17:29:49.805563   13777 main.go:141] libmachine: (addons-306463) Calling .GetIP
	I0910 17:29:49.808022   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:49.808405   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:29:49.808439   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:29:49.808624   13777 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 17:29:49.812736   13777 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:29:49.825102   13777 kubeadm.go:883] updating cluster {Name:addons-306463 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-306463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 17:29:49.825212   13777 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:29:49.825256   13777 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 17:29:49.856852   13777 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 17:29:49.856923   13777 ssh_runner.go:195] Run: which lz4
	I0910 17:29:49.860976   13777 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 17:29:49.865045   13777 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 17:29:49.865078   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 17:29:51.093518   13777 crio.go:462] duration metric: took 1.232563952s to copy over tarball
	I0910 17:29:51.093585   13777 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 17:29:53.221638   13777 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.128025242s)
	I0910 17:29:53.221664   13777 crio.go:469] duration metric: took 2.128123943s to extract the tarball
	I0910 17:29:53.221671   13777 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 17:29:53.258544   13777 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 17:29:53.300100   13777 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 17:29:53.300128   13777 cache_images.go:84] Images are preloaded, skipping loading
	I0910 17:29:53.300138   13777 kubeadm.go:934] updating node { 192.168.39.144 8443 v1.31.0 crio true true} ...
	I0910 17:29:53.300253   13777 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-306463 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-306463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 17:29:53.300317   13777 ssh_runner.go:195] Run: crio config
	I0910 17:29:53.353856   13777 cni.go:84] Creating CNI manager for ""
	I0910 17:29:53.353875   13777 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 17:29:53.353885   13777 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 17:29:53.353905   13777 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.144 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-306463 NodeName:addons-306463 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 17:29:53.354032   13777 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-306463"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 17:29:53.354084   13777 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 17:29:53.364093   13777 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 17:29:53.364159   13777 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 17:29:53.373663   13777 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0910 17:29:53.391325   13777 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 17:29:53.408601   13777 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0910 17:29:53.428267   13777 ssh_runner.go:195] Run: grep 192.168.39.144	control-plane.minikube.internal$ /etc/hosts
	I0910 17:29:53.432004   13777 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:29:53.443494   13777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:29:53.565386   13777 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:29:53.582101   13777 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463 for IP: 192.168.39.144
	I0910 17:29:53.582140   13777 certs.go:194] generating shared ca certs ...
	I0910 17:29:53.582161   13777 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:53.582320   13777 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 17:29:53.851863   13777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt ...
	I0910 17:29:53.851887   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt: {Name:mk391b947a0b07d47c3f48605c2169ac6bbd02dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:53.852030   13777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key ...
	I0910 17:29:53.852040   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key: {Name:mke85b1ed3e4a8e9bbc933ab9200470c82fbf9f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:53.852110   13777 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 17:29:54.025549   13777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt ...
	I0910 17:29:54.025576   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt: {Name:mkba6d1cf3fb11e6bd8f0b60294ec684bf33d7c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.025720   13777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key ...
	I0910 17:29:54.025730   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key: {Name:mke1e40be102cd0ea85ebf8e9804fe7294de9b3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.025806   13777 certs.go:256] generating profile certs ...
	I0910 17:29:54.025854   13777 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.key
	I0910 17:29:54.025873   13777 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt with IP's: []
	I0910 17:29:54.256975   13777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt ...
	I0910 17:29:54.257001   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: {Name:mkddd504fb642c11276cd07fd6115fe4786a05eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.257158   13777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.key ...
	I0910 17:29:54.257169   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.key: {Name:mkd6342dd54701d46a2aa87d79fc772b251c8012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.257264   13777 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.key.8f6ba92e
	I0910 17:29:54.257283   13777 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.crt.8f6ba92e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.144]
	I0910 17:29:54.390720   13777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.crt.8f6ba92e ...
	I0910 17:29:54.390752   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.crt.8f6ba92e: {Name:mkef82fca0b89b824a8a6247fbc2d43a96f4692c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.390921   13777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.key.8f6ba92e ...
	I0910 17:29:54.390940   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.key.8f6ba92e: {Name:mk548882b9e102cf63bf5a2676b5044c14781eaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.391030   13777 certs.go:381] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.crt.8f6ba92e -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.crt
	I0910 17:29:54.391118   13777 certs.go:385] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.key.8f6ba92e -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.key
	I0910 17:29:54.391182   13777 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.key
	I0910 17:29:54.391204   13777 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.crt with IP's: []
	I0910 17:29:54.752265   13777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.crt ...
	I0910 17:29:54.752292   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.crt: {Name:mkc361744979bc8404f5a5aaa8788af34523a213 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.752452   13777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.key ...
	I0910 17:29:54.752468   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.key: {Name:mkcded4c85166d07f3f2b1b8ff068b03a9d76311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.752681   13777 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 17:29:54.752717   13777 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 17:29:54.752753   13777 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 17:29:54.752785   13777 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 17:29:54.753440   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 17:29:54.779118   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 17:29:54.803026   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 17:29:54.825435   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 17:29:54.848031   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0910 17:29:54.872008   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 17:29:54.897479   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 17:29:54.922879   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 17:29:54.947831   13777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 17:29:54.974722   13777 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
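
The certs.go/crypto.go lines above cover generating the shared minikubeCA and proxyClientCA, the per-profile client, apiserver and aggregator certs, and copying everything into /var/lib/minikube/certs on the node. A rough sketch of the self-signed CA step using Go's crypto/x509, assuming RSA keys and an arbitrary validity window (minikube's actual helper may differ):

    // Minimal self-signed CA sketch along the lines of the certs.go steps above.
    // RSA key size, subject and validity are assumptions, not minikube's values.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        // Self-signed: the template acts as its own parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    }
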
	I0910 17:29:54.994110   13777 ssh_runner.go:195] Run: openssl version
	I0910 17:29:55.000395   13777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 17:29:55.013767   13777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:29:55.018473   13777 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:29:55.018531   13777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:29:55.024792   13777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 17:29:55.035682   13777 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 17:29:55.039752   13777 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 17:29:55.039807   13777 kubeadm.go:392] StartCluster: {Name:addons-306463 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-306463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:29:55.039892   13777 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 17:29:55.039955   13777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 17:29:55.094283   13777 cri.go:89] found id: ""
	I0910 17:29:55.094342   13777 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 17:29:55.112402   13777 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 17:29:55.123314   13777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 17:29:55.135689   13777 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 17:29:55.135707   13777 kubeadm.go:157] found existing configuration files:
	
	I0910 17:29:55.135753   13777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 17:29:55.144757   13777 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 17:29:55.144811   13777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 17:29:55.154051   13777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 17:29:55.162743   13777 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 17:29:55.162794   13777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 17:29:55.171799   13777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 17:29:55.180529   13777 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 17:29:55.180583   13777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 17:29:55.191873   13777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 17:29:55.200886   13777 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 17:29:55.200937   13777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 17:29:55.210181   13777 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 17:29:55.258814   13777 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 17:29:55.258968   13777 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 17:29:55.371415   13777 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 17:29:55.371545   13777 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 17:29:55.371669   13777 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 17:29:55.384083   13777 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 17:29:55.408465   13777 out.go:235]   - Generating certificates and keys ...
	I0910 17:29:55.408589   13777 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 17:29:55.408665   13777 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 17:29:55.897673   13777 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 17:29:56.059223   13777 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0910 17:29:56.278032   13777 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0910 17:29:56.441145   13777 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0910 17:29:56.605793   13777 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0910 17:29:56.605947   13777 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-306463 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	I0910 17:29:56.790976   13777 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0910 17:29:56.791214   13777 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-306463 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	I0910 17:29:56.836139   13777 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 17:29:57.046320   13777 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 17:29:57.222692   13777 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0910 17:29:57.222801   13777 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 17:29:57.462021   13777 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 17:29:57.829972   13777 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 17:29:57.954467   13777 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 17:29:58.166081   13777 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 17:29:58.224456   13777 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 17:29:58.224997   13777 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 17:29:58.227323   13777 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 17:29:58.229164   13777 out.go:235]   - Booting up control plane ...
	I0910 17:29:58.229261   13777 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 17:29:58.229329   13777 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 17:29:58.229426   13777 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 17:29:58.245412   13777 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 17:29:58.251271   13777 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 17:29:58.251364   13777 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 17:29:58.388887   13777 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 17:29:58.389039   13777 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 17:29:58.890585   13777 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.078984ms
	I0910 17:29:58.890687   13777 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 17:30:03.392681   13777 kubeadm.go:310] [api-check] The API server is healthy after 4.502932782s
	I0910 17:30:03.406115   13777 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 17:30:03.420124   13777 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 17:30:03.449395   13777 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 17:30:03.449667   13777 kubeadm.go:310] [mark-control-plane] Marking the node addons-306463 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 17:30:03.460309   13777 kubeadm.go:310] [bootstrap-token] Using token: 457t84.d2zxow5i3fyaif8g
	I0910 17:30:03.461609   13777 out.go:235]   - Configuring RBAC rules ...
	I0910 17:30:03.461716   13777 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 17:30:03.465462   13777 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 17:30:03.474356   13777 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 17:30:03.477241   13777 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 17:30:03.483988   13777 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 17:30:03.489715   13777 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 17:30:03.799075   13777 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 17:30:04.227910   13777 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 17:30:04.798072   13777 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 17:30:04.798097   13777 kubeadm.go:310] 
	I0910 17:30:04.798189   13777 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 17:30:04.798211   13777 kubeadm.go:310] 
	I0910 17:30:04.798306   13777 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 17:30:04.798317   13777 kubeadm.go:310] 
	I0910 17:30:04.798366   13777 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 17:30:04.798449   13777 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 17:30:04.798534   13777 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 17:30:04.798547   13777 kubeadm.go:310] 
	I0910 17:30:04.798615   13777 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 17:30:04.798626   13777 kubeadm.go:310] 
	I0910 17:30:04.798664   13777 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 17:30:04.798671   13777 kubeadm.go:310] 
	I0910 17:30:04.798731   13777 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 17:30:04.798795   13777 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 17:30:04.798868   13777 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 17:30:04.798878   13777 kubeadm.go:310] 
	I0910 17:30:04.798966   13777 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 17:30:04.799060   13777 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 17:30:04.799070   13777 kubeadm.go:310] 
	I0910 17:30:04.799182   13777 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 457t84.d2zxow5i3fyaif8g \
	I0910 17:30:04.799300   13777 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 \
	I0910 17:30:04.799341   13777 kubeadm.go:310] 	--control-plane 
	I0910 17:30:04.799355   13777 kubeadm.go:310] 
	I0910 17:30:04.799468   13777 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 17:30:04.799478   13777 kubeadm.go:310] 
	I0910 17:30:04.799599   13777 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 457t84.d2zxow5i3fyaif8g \
	I0910 17:30:04.799726   13777 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 
	I0910 17:30:04.800658   13777 kubeadm.go:310] W0910 17:29:55.239705     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 17:30:04.800920   13777 kubeadm.go:310] W0910 17:29:55.240584     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 17:30:04.801008   13777 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
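
The join commands printed by kubeadm above pin the cluster CA via --discovery-token-ca-cert-hash, which is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A small Go sketch that recomputes it from the ca.crt path used earlier in this log (an illustrative check, not minikube code):

    // Recompute the --discovery-token-ca-cert-hash value from the cluster CA:
    // SHA-256 over the DER-encoded Subject Public Key Info of the certificate.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(raw)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
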
	I0910 17:30:04.801028   13777 cni.go:84] Creating CNI manager for ""
	I0910 17:30:04.801040   13777 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 17:30:04.802881   13777 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 17:30:04.804227   13777 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 17:30:04.816674   13777 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
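
The 496-byte /etc/cni/net.d/1-k8s.conflist written above is the bridge CNI config referred to by the "Configuring bridge CNI" step. Its exact contents are not shown in this log; the Go sketch below only illustrates the general bridge + portmap conflist shape, reusing the podSubnet from the kubeadm config as the host-local IPAM subnet (all field values here are assumptions):

    // Emit a generic bridge CNI conflist for illustration; the real file
    // minikube writes may differ in fields and values.
    package main

    import (
        "encoding/json"
        "os"
    )

    func main() {
        conflist := map[string]interface{}{
            "cniVersion": "0.4.0",
            "name":       "bridge",
            "plugins": []map[string]interface{}{
                {
                    "type":        "bridge",
                    "bridge":      "bridge",
                    "ipMasq":      true,
                    "hairpinMode": true,
                    "ipam": map[string]interface{}{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16", // matches podSubnet in the kubeadm config above
                    },
                },
                {
                    "type":         "portmap",
                    "capabilities": map[string]bool{"portMappings": true},
                },
            },
        }
        enc := json.NewEncoder(os.Stdout)
        enc.SetIndent("", "  ")
        _ = enc.Encode(conflist)
    }
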
	I0910 17:30:04.835609   13777 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 17:30:04.835737   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:04.835739   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-306463 minikube.k8s.io/updated_at=2024_09_10T17_30_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=addons-306463 minikube.k8s.io/primary=true
	I0910 17:30:04.865385   13777 ops.go:34] apiserver oom_adj: -16
	I0910 17:30:04.960966   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:05.461285   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:05.961804   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:06.461686   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:06.961554   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:07.461362   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:07.961164   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:08.461339   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:08.961327   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:09.461036   13777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:09.564661   13777 kubeadm.go:1113] duration metric: took 4.728972481s to wait for elevateKubeSystemPrivileges
	I0910 17:30:09.564692   13777 kubeadm.go:394] duration metric: took 14.524892016s to StartCluster
	I0910 17:30:09.564710   13777 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:09.564844   13777 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:30:09.565243   13777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:09.565462   13777 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:30:09.565495   13777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0910 17:30:09.565538   13777 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0910 17:30:09.565627   13777 addons.go:69] Setting cloud-spanner=true in profile "addons-306463"
	I0910 17:30:09.565651   13777 addons.go:69] Setting yakd=true in profile "addons-306463"
	I0910 17:30:09.565662   13777 addons.go:234] Setting addon cloud-spanner=true in "addons-306463"
	I0910 17:30:09.565655   13777 addons.go:69] Setting inspektor-gadget=true in profile "addons-306463"
	I0910 17:30:09.565675   13777 addons.go:234] Setting addon yakd=true in "addons-306463"
	I0910 17:30:09.565670   13777 addons.go:69] Setting gcp-auth=true in profile "addons-306463"
	I0910 17:30:09.565685   13777 addons.go:234] Setting addon inspektor-gadget=true in "addons-306463"
	I0910 17:30:09.565692   13777 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-306463"
	I0910 17:30:09.565703   13777 mustload.go:65] Loading cluster: addons-306463
	I0910 17:30:09.565700   13777 config.go:182] Loaded profile config "addons-306463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:30:09.565711   13777 addons.go:69] Setting metrics-server=true in profile "addons-306463"
	I0910 17:30:09.565715   13777 addons.go:69] Setting helm-tiller=true in profile "addons-306463"
	I0910 17:30:09.565720   13777 addons.go:69] Setting storage-provisioner=true in profile "addons-306463"
	I0910 17:30:09.565734   13777 addons.go:234] Setting addon metrics-server=true in "addons-306463"
	I0910 17:30:09.565738   13777 addons.go:234] Setting addon storage-provisioner=true in "addons-306463"
	I0910 17:30:09.565740   13777 addons.go:69] Setting ingress=true in profile "addons-306463"
	I0910 17:30:09.565740   13777 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-306463"
	I0910 17:30:09.565753   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565754   13777 addons.go:69] Setting volcano=true in profile "addons-306463"
	I0910 17:30:09.565760   13777 addons.go:69] Setting ingress-dns=true in profile "addons-306463"
	I0910 17:30:09.565765   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565765   13777 addons.go:69] Setting registry=true in profile "addons-306463"
	I0910 17:30:09.565776   13777 addons.go:234] Setting addon volcano=true in "addons-306463"
	I0910 17:30:09.565760   13777 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-306463"
	I0910 17:30:09.565783   13777 addons.go:234] Setting addon registry=true in "addons-306463"
	I0910 17:30:09.565793   13777 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-306463"
	I0910 17:30:09.565801   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565809   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565810   13777 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-306463"
	I0910 17:30:09.565834   13777 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-306463"
	I0910 17:30:09.565735   13777 addons.go:234] Setting addon helm-tiller=true in "addons-306463"
	I0910 17:30:09.565889   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565897   13777 config.go:182] Loaded profile config "addons-306463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:30:09.565777   13777 addons.go:234] Setting addon ingress-dns=true in "addons-306463"
	I0910 17:30:09.566180   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566186   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566191   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.566190   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566210   13777 addons.go:69] Setting default-storageclass=true in profile "addons-306463"
	I0910 17:30:09.566212   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566220   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566224   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566226   13777 addons.go:69] Setting volumesnapshots=true in profile "addons-306463"
	I0910 17:30:09.565707   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565756   13777 addons.go:234] Setting addon ingress=true in "addons-306463"
	I0910 17:30:09.566246   13777 addons.go:234] Setting addon volumesnapshots=true in "addons-306463"
	I0910 17:30:09.565705   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.565801   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.566276   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.566214   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566431   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.565756   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.566494   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566227   13777 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-306463"
	I0910 17:30:09.565709   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.566515   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566518   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566594   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566617   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566232   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566712   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566737   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566765   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.566781   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566800   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566821   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566831   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566843   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566802   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566880   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566882   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566891   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566902   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.566910   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.566935   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.567017   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.567048   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.567756   13777 out.go:177] * Verifying Kubernetes components...
	I0910 17:30:09.569434   13777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:30:09.582777   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0910 17:30:09.589426   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.589457   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.589941   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.591066   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.591086   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.593346   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.593990   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.594031   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.614952   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36269
	I0910 17:30:09.615511   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.616077   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.616100   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.625500   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.626139   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.626180   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.626663   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40557
	I0910 17:30:09.627167   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.627742   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.627760   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.628137   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.628731   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.628754   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.628942   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42255
	I0910 17:30:09.629508   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.629998   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.630014   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.630491   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.631027   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.631063   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.631232   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33169
	I0910 17:30:09.631984   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.632597   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.632614   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.633144   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I0910 17:30:09.633568   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.634036   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.634051   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.634409   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.634947   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.634984   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.635276   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.635474   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.639823   13777 addons.go:234] Setting addon default-storageclass=true in "addons-306463"
	I0910 17:30:09.639870   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.640208   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.640228   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.649585   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42783
	I0910 17:30:09.650122   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.650724   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.650742   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.651106   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.651353   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43517
	I0910 17:30:09.651675   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.651705   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.651834   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.652091   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41021
	I0910 17:30:09.652330   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.652346   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.652505   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.653024   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.653041   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.653481   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39633
	I0910 17:30:09.653910   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.654114   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.654913   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32845
	I0910 17:30:09.655435   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.655964   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.655981   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.656044   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.656117   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.656812   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.656832   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.657418   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.657493   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38221
	I0910 17:30:09.657907   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.658557   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.658600   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.658821   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
	I0910 17:30:09.659275   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.659751   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.659768   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.660535   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.660593   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40217
	I0910 17:30:09.661560   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.661593   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.661831   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.661907   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.662410   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.662439   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.662442   13777 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0910 17:30:09.662415   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.662611   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.662676   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32849
	I0910 17:30:09.662687   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.663387   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.663450   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.663526   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.663886   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.664005   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.664015   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.664124   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.664133   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.664307   13777 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0910 17:30:09.664322   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0910 17:30:09.664338   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.664427   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.664960   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.665000   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.665625   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.665808   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.666537   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.666894   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.666927   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.667412   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38989
	I0910 17:30:09.667675   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.668696   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.669275   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.669291   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.669343   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.669546   13777 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0910 17:30:09.670692   13777 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 17:30:09.670708   13777 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 17:30:09.670727   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.670952   13777 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-306463"
	I0910 17:30:09.670991   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:09.671783   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.671816   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.672717   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.673017   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.673445   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.673492   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.673650   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.673854   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.674003   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.676862   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.676873   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41829
	I0910 17:30:09.676918   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45793
	I0910 17:30:09.676994   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.677003   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.677025   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.677041   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.677261   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.677376   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.677625   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.677718   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.678066   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44557
	I0910 17:30:09.678469   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.678717   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.678737   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.678906   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.678926   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.679232   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.679271   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.679735   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.679770   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.679844   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.679855   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.680043   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.680698   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.681570   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.681611   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.681815   13777 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0910 17:30:09.681916   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.682688   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.682726   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.683190   13777 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 17:30:09.683203   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0910 17:30:09.683218   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.686842   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.687460   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.687482   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.687670   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.687848   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.688024   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.688177   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.694726   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39185
	I0910 17:30:09.695273   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.695643   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36063
	I0910 17:30:09.696099   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.696281   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.696293   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.696679   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.696746   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33919
	I0910 17:30:09.696887   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.698037   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.698762   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.698922   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.698941   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.699119   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.699136   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.699179   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46211
	I0910 17:30:09.699522   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.699585   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.699840   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.700601   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.700644   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.700874   13777 out.go:177]   - Using image docker.io/registry:2.8.3
	I0910 17:30:09.700998   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.701016   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40939
	I0910 17:30:09.701360   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.701612   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41153
	I0910 17:30:09.701832   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.701844   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.702101   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.702118   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.702224   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.702441   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.703052   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.703125   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.703591   13777 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0910 17:30:09.704094   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.704109   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.704260   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0910 17:30:09.704704   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.704740   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.704775   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.705063   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0910 17:30:09.705196   13777 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0910 17:30:09.705211   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0910 17:30:09.705219   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I0910 17:30:09.705226   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.705196   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.705342   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.706377   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.706400   13777 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0910 17:30:09.706411   13777 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0910 17:30:09.706426   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.706440   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.706471   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.706482   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.707075   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.707216   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.707235   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.707300   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.707366   13777 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0910 17:30:09.707624   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.707822   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.708675   13777 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0910 17:30:09.708690   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0910 17:30:09.708705   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.712661   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.713131   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.713163   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.713366   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.713421   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.713480   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.713861   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:09.713873   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:09.713918   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.713956   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.713983   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.714002   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.714031   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.714206   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.714247   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.714468   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:09.714499   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45441
	I0910 17:30:09.714592   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:09.714604   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:09.714613   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:09.714627   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:09.714682   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.714871   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.714961   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.714997   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.715045   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.715064   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.715156   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.715206   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.715419   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.715432   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.715492   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:09.715508   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:09.715557   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	W0910 17:30:09.715586   13777 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0910 17:30:09.715674   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.715712   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.715796   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.716017   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.716559   13777 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:30:09.716638   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0910 17:30:09.717659   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.717965   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.718259   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.719379   13777 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0910 17:30:09.719428   13777 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 17:30:09.719443   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0910 17:30:09.719454   13777 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0910 17:30:09.720905   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34715
	I0910 17:30:09.721013   13777 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0910 17:30:09.721027   13777 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0910 17:30:09.721044   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.721066   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33463
	I0910 17:30:09.721206   13777 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:30:09.721216   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 17:30:09.721229   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.721849   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.722165   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.722359   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.722466   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.722470   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44619
	I0910 17:30:09.722708   13777 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:30:09.722753   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0910 17:30:09.723597   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.723648   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.723855   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.724282   13777 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0910 17:30:09.724307   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0910 17:30:09.724324   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.724525   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.725165   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0910 17:30:09.725201   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.725218   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.725561   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.726077   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.726104   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.726140   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.726215   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.726601   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.726630   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.726642   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.726678   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:09.726725   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:09.726825   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.727007   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.727185   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.727319   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.727446   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.727475   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.727554   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0910 17:30:09.727608   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.727780   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.728076   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.728343   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.728947   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.729258   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.729880   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.729952   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.730000   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.730827   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0910 17:30:09.731231   13777 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0910 17:30:09.731583   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40943
	I0910 17:30:09.731692   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.732073   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.732112   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.732762   13777 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0910 17:30:09.732777   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0910 17:30:09.732794   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.733213   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.733241   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.733392   13777 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0910 17:30:09.733608   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.733837   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0910 17:30:09.733864   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.734595   13777 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0910 17:30:09.733877   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.734613   13777 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0910 17:30:09.734632   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.734774   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.736617   13777 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0910 17:30:09.737387   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.737645   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.737692   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0910 17:30:09.737715   13777 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0910 17:30:09.737739   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.737924   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.737974   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.738098   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.738264   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.738435   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.738443   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.738478   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.738597   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.738607   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.738839   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.738982   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.739120   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.740323   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41219
	I0910 17:30:09.740652   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.740693   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.741101   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.741129   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.741227   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.741442   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.741462   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.741464   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.741593   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.741743   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.741743   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:09.741915   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.743141   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.743345   13777 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 17:30:09.743359   13777 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 17:30:09.743372   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.746708   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.746740   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.746763   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.746782   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.746853   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.746981   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.747118   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	W0910 17:30:09.748150   13777 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56800->192.168.39.144:22: read: connection reset by peer
	I0910 17:30:09.748170   13777 retry.go:31] will retry after 285.141352ms: ssh: handshake failed: read tcp 192.168.39.1:56800->192.168.39.144:22: read: connection reset by peer
	I0910 17:30:09.753685   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38941
	I0910 17:30:09.753988   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:09.754407   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:09.754424   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:09.754715   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:09.754955   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:09.756271   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:09.758237   13777 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0910 17:30:09.759829   13777 out.go:177]   - Using image docker.io/busybox:stable
	I0910 17:30:09.761821   13777 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 17:30:09.761840   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0910 17:30:09.761857   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:09.764453   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.764819   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:09.764843   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:09.764947   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:09.765134   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:09.765249   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:09.765359   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	W0910 17:30:09.765990   13777 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56802->192.168.39.144:22: read: connection reset by peer
	I0910 17:30:09.766007   13777 retry.go:31] will retry after 202.018394ms: ssh: handshake failed: read tcp 192.168.39.1:56802->192.168.39.144:22: read: connection reset by peer
	W0910 17:30:09.969022   13777 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56808->192.168.39.144:22: read: connection reset by peer
	I0910 17:30:09.969051   13777 retry.go:31] will retry after 235.947645ms: ssh: handshake failed: read tcp 192.168.39.1:56808->192.168.39.144:22: read: connection reset by peer
	I0910 17:30:10.094763   13777 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:30:10.094906   13777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0910 17:30:10.122256   13777 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0910 17:30:10.122278   13777 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0910 17:30:10.186667   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0910 17:30:10.191366   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 17:30:10.193981   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0910 17:30:10.193996   13777 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0910 17:30:10.259618   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0910 17:30:10.270667   13777 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0910 17:30:10.270685   13777 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0910 17:30:10.276555   13777 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0910 17:30:10.276571   13777 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0910 17:30:10.310365   13777 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 17:30:10.310384   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0910 17:30:10.315555   13777 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0910 17:30:10.315573   13777 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0910 17:30:10.352407   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:30:10.369092   13777 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0910 17:30:10.369117   13777 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0910 17:30:10.381559   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0910 17:30:10.401157   13777 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0910 17:30:10.401178   13777 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0910 17:30:10.403491   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0910 17:30:10.403515   13777 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0910 17:30:10.472910   13777 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0910 17:30:10.472930   13777 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0910 17:30:10.489850   13777 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0910 17:30:10.489869   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0910 17:30:10.511021   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 17:30:10.534214   13777 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0910 17:30:10.534238   13777 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0910 17:30:10.554150   13777 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0910 17:30:10.554167   13777 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0910 17:30:10.557521   13777 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 17:30:10.557543   13777 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 17:30:10.572746   13777 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0910 17:30:10.572764   13777 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0910 17:30:10.573994   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0910 17:30:10.574011   13777 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0910 17:30:10.704085   13777 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0910 17:30:10.704110   13777 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0910 17:30:10.727766   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 17:30:10.747348   13777 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 17:30:10.747374   13777 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 17:30:10.763336   13777 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0910 17:30:10.763355   13777 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0910 17:30:10.766511   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0910 17:30:10.774570   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0910 17:30:10.774593   13777 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0910 17:30:10.782428   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0910 17:30:10.782444   13777 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0910 17:30:10.809598   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0910 17:30:11.063857   13777 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0910 17:30:11.063892   13777 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0910 17:30:11.074085   13777 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:11.074112   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0910 17:30:11.088999   13777 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0910 17:30:11.089024   13777 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0910 17:30:11.100617   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 17:30:11.112993   13777 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0910 17:30:11.113018   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0910 17:30:11.298472   13777 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0910 17:30:11.298502   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0910 17:30:11.316663   13777 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0910 17:30:11.316693   13777 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0910 17:30:11.369539   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0910 17:30:11.383347   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:11.653526   13777 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0910 17:30:11.653554   13777 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0910 17:30:11.678871   13777 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0910 17:30:11.678895   13777 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0910 17:30:11.862075   13777 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 17:30:11.862095   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0910 17:30:11.921871   13777 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0910 17:30:11.921897   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0910 17:30:12.123524   13777 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.028712837s)
	I0910 17:30:12.123546   13777 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.02861212s)
	I0910 17:30:12.123568   13777 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0910 17:30:12.138011   13777 node_ready.go:35] waiting up to 6m0s for node "addons-306463" to be "Ready" ...
	I0910 17:30:12.143070   13777 node_ready.go:49] node "addons-306463" has status "Ready":"True"
	I0910 17:30:12.143098   13777 node_ready.go:38] duration metric: took 5.040837ms for node "addons-306463" to be "Ready" ...
	I0910 17:30:12.143109   13777 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:30:12.155112   13777 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-c5qxp" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:12.301578   13777 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0910 17:30:12.301604   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0910 17:30:12.345205   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 17:30:12.640873   13777 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-306463" context rescaled to 1 replicas
	I0910 17:30:12.648121   13777 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 17:30:12.648142   13777 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0910 17:30:13.153205   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 17:30:13.916729   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.73001943s)
	I0910 17:30:13.916745   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.725354593s)
	I0910 17:30:13.916787   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:13.916800   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:13.916812   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:13.916818   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.657160792s)
	I0910 17:30:13.916832   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:13.916840   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:13.916849   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:13.917138   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:13.917155   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:13.917164   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:13.917162   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:13.917172   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:13.917292   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:13.917292   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:13.917312   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:13.917321   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:13.917329   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:13.917336   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:13.917347   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:13.917419   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:13.917426   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:13.917458   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:13.917492   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:13.917516   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:13.919078   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:13.919092   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:13.919112   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:13.919122   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:13.919092   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:14.275505   13777 pod_ready.go:103] pod "coredns-6f6b679f8f-c5qxp" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:14.583313   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.230869529s)
	I0910 17:30:14.583362   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:14.583374   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:14.583656   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:14.583673   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:14.583683   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:14.583691   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:14.583884   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:14.583898   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:16.178328   13777 pod_ready.go:93] pod "coredns-6f6b679f8f-c5qxp" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:16.178361   13777 pod_ready.go:82] duration metric: took 4.02322283s for pod "coredns-6f6b679f8f-c5qxp" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:16.178376   13777 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fk47l" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:16.744986   13777 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0910 17:30:16.745032   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:16.748322   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:16.748729   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:16.748755   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:16.748928   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:16.749117   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:16.749277   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:16.749413   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:16.985599   13777 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0910 17:30:17.019642   13777 addons.go:234] Setting addon gcp-auth=true in "addons-306463"
	I0910 17:30:17.019684   13777 host.go:66] Checking if "addons-306463" exists ...
	I0910 17:30:17.020002   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:17.020027   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:17.035756   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41545
	I0910 17:30:17.036129   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:17.036614   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:17.036638   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:17.036957   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:17.037567   13777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:30:17.037606   13777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:17.052624   13777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33467
	I0910 17:30:17.053092   13777 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:17.053555   13777 main.go:141] libmachine: Using API Version  1
	I0910 17:30:17.053575   13777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:17.053874   13777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:17.054058   13777 main.go:141] libmachine: (addons-306463) Calling .GetState
	I0910 17:30:17.055568   13777 main.go:141] libmachine: (addons-306463) Calling .DriverName
	I0910 17:30:17.055797   13777 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0910 17:30:17.055824   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHHostname
	I0910 17:30:17.058347   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:17.058720   13777 main.go:141] libmachine: (addons-306463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:46:16", ip: ""} in network mk-addons-306463: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:38 +0000 UTC Type:0 Mac:52:54:00:74:46:16 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-306463 Clientid:01:52:54:00:74:46:16}
	I0910 17:30:17.058755   13777 main.go:141] libmachine: (addons-306463) DBG | domain addons-306463 has defined IP address 192.168.39.144 and MAC address 52:54:00:74:46:16 in network mk-addons-306463
	I0910 17:30:17.058878   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHPort
	I0910 17:30:17.059056   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHKeyPath
	I0910 17:30:17.059232   13777 main.go:141] libmachine: (addons-306463) Calling .GetSSHUsername
	I0910 17:30:17.059408   13777 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/addons-306463/id_rsa Username:docker}
	I0910 17:30:18.294928   13777 pod_ready.go:103] pod "coredns-6f6b679f8f-fk47l" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:18.793144   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.411553079s)
	I0910 17:30:18.793145   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.282095983s)
	I0910 17:30:18.793236   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.065445297s)
	I0910 17:30:18.793270   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793187   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793285   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793310   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793340   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.026800859s)
	I0910 17:30:18.793371   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793387   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793269   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793447   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793468   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.692800645s)
	I0910 17:30:18.793374   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.983746942s)
	I0910 17:30:18.793499   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793508   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793513   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793517   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793601   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.424038762s)
	I0910 17:30:18.793624   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793633   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793677   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.793701   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.793737   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793764   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.793796   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.410424596s)
	W0910 17:30:18.793833   13777 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0910 17:30:18.793860   13777 retry.go:31] will retry after 281.684636ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0910 17:30:18.793941   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.448707771s)
	I0910 17:30:18.793961   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.793971   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.794043   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.794051   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.794058   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.794066   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795483   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.795531   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.795547   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.795569   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795575   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795583   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.795590   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795649   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795657   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795658   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.795665   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.795672   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795682   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795689   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795696   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.795703   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795713   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.795732   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795744   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795751   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.795757   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795762   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795771   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795781   13777 addons.go:475] Verifying addon ingress=true in "addons-306463"
	I0910 17:30:18.795793   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.795812   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795818   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795824   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.795830   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795884   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.795900   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.795908   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.795914   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.795971   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.796000   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.796018   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.796031   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.796038   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.796047   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.796055   13777 addons.go:475] Verifying addon metrics-server=true in "addons-306463"
	I0910 17:30:18.796152   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.796021   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.796451   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.796481   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.796495   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.796938   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.796966   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.796973   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.796992   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.797004   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.797213   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.797217   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.797239   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.797246   13777 addons.go:475] Verifying addon registry=true in "addons-306463"
	I0910 17:30:18.795865   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:18.798742   13777 out.go:177] * Verifying ingress addon...
	I0910 17:30:18.799682   13777 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-306463 service yakd-dashboard -n yakd-dashboard
	
	I0910 17:30:18.799716   13777 out.go:177] * Verifying registry addon...
	I0910 17:30:18.801342   13777 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0910 17:30:18.802106   13777 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0910 17:30:18.809767   13777 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0910 17:30:18.809787   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:18.811444   13777 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0910 17:30:18.811469   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:18.826959   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.826981   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.827246   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.827267   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	W0910 17:30:18.827341   13777 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0910 17:30:18.834146   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:18.834161   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:18.834395   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:18.834415   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:18.834429   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:19.076009   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:19.326915   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:19.327040   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:19.615946   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.462685919s)
	I0910 17:30:19.616011   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:19.616033   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:19.615967   13777 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.560143893s)
	I0910 17:30:19.616447   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:19.616479   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:19.616503   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:19.616512   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:19.616521   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:19.616744   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:19.616759   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:19.616776   13777 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-306463"
	I0910 17:30:19.617622   13777 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:30:19.618428   13777 out.go:177] * Verifying csi-hostpath-driver addon...
	I0910 17:30:19.620045   13777 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0910 17:30:19.621038   13777 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0910 17:30:19.621222   13777 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0910 17:30:19.621237   13777 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0910 17:30:19.662236   13777 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0910 17:30:19.662270   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:19.722439   13777 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0910 17:30:19.722462   13777 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0910 17:30:19.763288   13777 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0910 17:30:19.763308   13777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0910 17:30:19.814766   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:19.815036   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:19.834549   13777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0910 17:30:20.128981   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:20.307489   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:20.307877   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:20.625102   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:20.683791   13777 pod_ready.go:103] pod "coredns-6f6b679f8f-fk47l" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:20.806684   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:20.806816   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:20.823709   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.747658678s)
	I0910 17:30:20.823758   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:20.823770   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:20.824016   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:20.824031   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:20.824040   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:20.824048   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:20.824246   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:20.824312   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:20.824334   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:21.152748   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:21.258310   13777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.423679033s)
	I0910 17:30:21.258353   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:21.258363   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:21.258652   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:21.258672   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:21.258675   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:21.258682   13777 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:21.258781   13777 main.go:141] libmachine: (addons-306463) Calling .Close
	I0910 17:30:21.259002   13777 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:21.259047   13777 main.go:141] libmachine: (addons-306463) DBG | Closing plugin on server side
	I0910 17:30:21.259050   13777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:21.261123   13777 addons.go:475] Verifying addon gcp-auth=true in "addons-306463"
	I0910 17:30:21.262702   13777 out.go:177] * Verifying gcp-auth addon...
	I0910 17:30:21.265139   13777 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0910 17:30:21.309290   13777 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0910 17:30:21.309307   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:21.386582   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:21.386884   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:21.629140   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:21.686431   13777 pod_ready.go:98] pod "coredns-6f6b679f8f-fk47l" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:21 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.144 HostIPs:[{IP:192.168.39.144}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-10 17:30:09 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-10 17:30:13 +0000 UTC,FinishedAt:2024-09-10 17:30:19 +0000 UTC,ContainerID:cri-o://2a84ae286f17289f7f3a5209cd0c7e4890910cd71d908afdc52a58de592bbefa,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://2a84ae286f17289f7f3a5209cd0c7e4890910cd71d908afdc52a58de592bbefa Started:0xc00269b790 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001d3ebf0} {Name:kube-api-access-vvw44 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001d3ec10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0910 17:30:21.686462   13777 pod_ready.go:82] duration metric: took 5.508078868s for pod "coredns-6f6b679f8f-fk47l" in "kube-system" namespace to be "Ready" ...
	E0910 17:30:21.686473   13777 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-fk47l" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:21 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:09 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.144 HostIPs:[{IP:192.168.39.144}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-10 17:30:09 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-10 17:30:13 +0000 UTC,FinishedAt:2024-09-10 17:30:19 +0000 UTC,ContainerID:cri-o://2a84ae286f17289f7f3a5209cd0c7e4890910cd71d908afdc52a58de592bbefa,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://2a84ae286f17289f7f3a5209cd0c7e4890910cd71d908afdc52a58de592bbefa Started:0xc00269b790 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001d3ebf0} {Name:kube-api-access-vvw44 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001d3ec10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0910 17:30:21.686485   13777 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.694377   13777 pod_ready.go:93] pod "etcd-addons-306463" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:21.694399   13777 pod_ready.go:82] duration metric: took 7.904964ms for pod "etcd-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.694410   13777 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.699906   13777 pod_ready.go:93] pod "kube-apiserver-addons-306463" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:21.699925   13777 pod_ready.go:82] duration metric: took 5.506518ms for pod "kube-apiserver-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.699935   13777 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.706491   13777 pod_ready.go:93] pod "kube-controller-manager-addons-306463" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:21.706508   13777 pod_ready.go:82] duration metric: took 6.56701ms for pod "kube-controller-manager-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.706517   13777 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-js72f" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.711913   13777 pod_ready.go:93] pod "kube-proxy-js72f" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:21.711927   13777 pod_ready.go:82] duration metric: took 5.405396ms for pod "kube-proxy-js72f" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.711934   13777 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:21.771105   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:21.806408   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:21.807158   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:22.082652   13777 pod_ready.go:93] pod "kube-scheduler-addons-306463" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:22.082672   13777 pod_ready.go:82] duration metric: took 370.731346ms for pod "kube-scheduler-addons-306463" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:22.082683   13777 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:22.127515   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:22.269247   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:22.306663   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:22.306817   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:22.626885   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:22.769155   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:22.806860   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:22.807059   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:23.126514   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:23.268573   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:23.304984   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:23.308344   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:23.625436   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:23.768625   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:23.806414   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:23.807737   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:24.089626   13777 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:24.126099   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:24.269316   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:24.306325   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:24.307191   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:24.626187   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:24.769060   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:24.805608   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:24.805998   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:25.284162   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:25.284693   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:25.304402   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:25.305601   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:25.625547   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:25.769118   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:25.805736   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:25.806413   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:26.125645   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:26.269608   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:26.307645   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:26.310692   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:26.588316   13777 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:26.625476   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:26.768854   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:26.805985   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:26.806757   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:27.126110   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:27.268618   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:27.305185   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:27.305610   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:27.625855   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:27.768850   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:27.806424   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:27.806708   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:28.126113   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:28.269445   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:28.306451   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:28.306949   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:28.589535   13777 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:28.625966   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:28.769016   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:28.805194   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:28.806093   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:29.125865   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:29.268979   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:29.306285   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:29.307264   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:29.625480   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:29.768316   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:29.807378   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:29.807652   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:30.126183   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:30.268852   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:30.307999   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:30.309034   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:30.625705   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:30.768655   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:30.807245   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:30.807772   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:31.088566   13777 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:31.125747   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:31.268110   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:31.309583   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:31.310629   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:31.665764   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:31.768905   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:31.804955   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:31.806706   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:32.125989   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:32.269609   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:32.307383   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:32.309129   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:32.626614   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:32.768068   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:32.806872   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:32.807203   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:33.089535   13777 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:33.125706   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:33.269256   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:33.305975   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:33.306252   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:33.706857   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:33.769189   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:33.805877   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:33.808046   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:34.126107   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:34.269399   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:34.306128   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:34.306283   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:34.625316   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:34.769118   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:34.805784   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:34.806308   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:35.131152   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:35.269262   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:35.305790   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:35.306213   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:35.587677   13777 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:35.626384   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:35.769202   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:35.806266   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:35.806509   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:36.127407   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:36.270434   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:36.310101   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:36.311099   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:36.590031   13777 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:36.590052   13777 pod_ready.go:82] duration metric: took 14.507363417s for pod "nvidia-device-plugin-daemonset-smwnt" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:36.590060   13777 pod_ready.go:39] duration metric: took 24.446938548s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:30:36.590077   13777 api_server.go:52] waiting for apiserver process to appear ...
	I0910 17:30:36.590151   13777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:36.618197   13777 api_server.go:72] duration metric: took 27.052704342s to wait for apiserver process to appear ...
	I0910 17:30:36.618222   13777 api_server.go:88] waiting for apiserver healthz status ...
	I0910 17:30:36.618255   13777 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0910 17:30:36.624545   13777 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0910 17:30:36.625767   13777 api_server.go:141] control plane version: v1.31.0
	I0910 17:30:36.625787   13777 api_server.go:131] duration metric: took 7.55866ms to wait for apiserver health ...
	I0910 17:30:36.625795   13777 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 17:30:36.628168   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:36.635782   13777 system_pods.go:59] 18 kube-system pods found
	I0910 17:30:36.635816   13777 system_pods.go:61] "coredns-6f6b679f8f-c5qxp" [5ce9784e-e567-4ff5-a7fc-cb8589c471c1] Running
	I0910 17:30:36.635828   13777 system_pods.go:61] "csi-hostpath-attacher-0" [e5afcda1-955a-445a-95b8-dc286510fa6f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0910 17:30:36.635837   13777 system_pods.go:61] "csi-hostpath-resizer-0" [5ab24cbf-8d77-43c3-9db2-6e06eed48352] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0910 17:30:36.635848   13777 system_pods.go:61] "csi-hostpathplugin-8hg5b" [f919643c-2604-4be0-8895-fe335d9c578a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0910 17:30:36.635853   13777 system_pods.go:61] "etcd-addons-306463" [dd177bb5-fe2a-4136-a871-92cd0f322fce] Running
	I0910 17:30:36.635862   13777 system_pods.go:61] "kube-apiserver-addons-306463" [7c3b5014-0b97-43e9-b162-3856dabfa5c1] Running
	I0910 17:30:36.635868   13777 system_pods.go:61] "kube-controller-manager-addons-306463" [bd143d52-b147-4e2b-8221-4b4c215500f8] Running
	I0910 17:30:36.635878   13777 system_pods.go:61] "kube-ingress-dns-minikube" [33998c91-0157-46f1-aa90-c6001166fff3] Running
	I0910 17:30:36.635884   13777 system_pods.go:61] "kube-proxy-js72f" [97604350-aebe-4a6c-b687-0204de19c3f5] Running
	I0910 17:30:36.635890   13777 system_pods.go:61] "kube-scheduler-addons-306463" [6eb6466c-c3d4-4e16-b246-c964865de3f6] Running
	I0910 17:30:36.635900   13777 system_pods.go:61] "metrics-server-84c5f94fbc-q6wcq" [4dc23d17-89f0-47a5-8880-0cf317f8a901] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 17:30:36.635909   13777 system_pods.go:61] "nvidia-device-plugin-daemonset-smwnt" [cf2f1df4-c2cd-4ab3-927a-16595a20e831] Running
	I0910 17:30:36.635921   13777 system_pods.go:61] "registry-66c9cd494c-6qxxb" [e9ac504f-2687-4fc9-bc82-285fcdbd1c77] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0910 17:30:36.635932   13777 system_pods.go:61] "registry-proxy-dmz6w" [61812c3a-2248-430b-97e8-3b188671e0eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0910 17:30:36.635944   13777 system_pods.go:61] "snapshot-controller-56fcc65765-nnnw7" [5edd6128-e9f7-431b-822d-49f5ef92d0af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:36.635956   13777 system_pods.go:61] "snapshot-controller-56fcc65765-w9ln4" [1a1094b3-ec64-4401-b8f6-8812fa8ed85d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:36.635965   13777 system_pods.go:61] "storage-provisioner" [6196330e-c966-44c2-aedd-6dc5e570c6e5] Running
	I0910 17:30:36.635976   13777 system_pods.go:61] "tiller-deploy-b48cc5f79-4jxbr" [1dfb2d44-f679-47b9-8f2d-4d144742e3a1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0910 17:30:36.635989   13777 system_pods.go:74] duration metric: took 10.187442ms to wait for pod list to return data ...
	I0910 17:30:36.636002   13777 default_sa.go:34] waiting for default service account to be created ...
	I0910 17:30:36.640110   13777 default_sa.go:45] found service account: "default"
	I0910 17:30:36.640132   13777 default_sa.go:55] duration metric: took 4.119977ms for default service account to be created ...
	I0910 17:30:36.640142   13777 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 17:30:36.647574   13777 system_pods.go:86] 18 kube-system pods found
	I0910 17:30:36.647597   13777 system_pods.go:89] "coredns-6f6b679f8f-c5qxp" [5ce9784e-e567-4ff5-a7fc-cb8589c471c1] Running
	I0910 17:30:36.647606   13777 system_pods.go:89] "csi-hostpath-attacher-0" [e5afcda1-955a-445a-95b8-dc286510fa6f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0910 17:30:36.647612   13777 system_pods.go:89] "csi-hostpath-resizer-0" [5ab24cbf-8d77-43c3-9db2-6e06eed48352] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0910 17:30:36.647620   13777 system_pods.go:89] "csi-hostpathplugin-8hg5b" [f919643c-2604-4be0-8895-fe335d9c578a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0910 17:30:36.647624   13777 system_pods.go:89] "etcd-addons-306463" [dd177bb5-fe2a-4136-a871-92cd0f322fce] Running
	I0910 17:30:36.647629   13777 system_pods.go:89] "kube-apiserver-addons-306463" [7c3b5014-0b97-43e9-b162-3856dabfa5c1] Running
	I0910 17:30:36.647632   13777 system_pods.go:89] "kube-controller-manager-addons-306463" [bd143d52-b147-4e2b-8221-4b4c215500f8] Running
	I0910 17:30:36.647637   13777 system_pods.go:89] "kube-ingress-dns-minikube" [33998c91-0157-46f1-aa90-c6001166fff3] Running
	I0910 17:30:36.647640   13777 system_pods.go:89] "kube-proxy-js72f" [97604350-aebe-4a6c-b687-0204de19c3f5] Running
	I0910 17:30:36.647644   13777 system_pods.go:89] "kube-scheduler-addons-306463" [6eb6466c-c3d4-4e16-b246-c964865de3f6] Running
	I0910 17:30:36.647649   13777 system_pods.go:89] "metrics-server-84c5f94fbc-q6wcq" [4dc23d17-89f0-47a5-8880-0cf317f8a901] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 17:30:36.647653   13777 system_pods.go:89] "nvidia-device-plugin-daemonset-smwnt" [cf2f1df4-c2cd-4ab3-927a-16595a20e831] Running
	I0910 17:30:36.647660   13777 system_pods.go:89] "registry-66c9cd494c-6qxxb" [e9ac504f-2687-4fc9-bc82-285fcdbd1c77] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0910 17:30:36.647668   13777 system_pods.go:89] "registry-proxy-dmz6w" [61812c3a-2248-430b-97e8-3b188671e0eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0910 17:30:36.647676   13777 system_pods.go:89] "snapshot-controller-56fcc65765-nnnw7" [5edd6128-e9f7-431b-822d-49f5ef92d0af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:36.647684   13777 system_pods.go:89] "snapshot-controller-56fcc65765-w9ln4" [1a1094b3-ec64-4401-b8f6-8812fa8ed85d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:36.647688   13777 system_pods.go:89] "storage-provisioner" [6196330e-c966-44c2-aedd-6dc5e570c6e5] Running
	I0910 17:30:36.647693   13777 system_pods.go:89] "tiller-deploy-b48cc5f79-4jxbr" [1dfb2d44-f679-47b9-8f2d-4d144742e3a1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0910 17:30:36.647702   13777 system_pods.go:126] duration metric: took 7.55431ms to wait for k8s-apps to be running ...
	I0910 17:30:36.647708   13777 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 17:30:36.647747   13777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:30:36.688724   13777 system_svc.go:56] duration metric: took 40.998614ms WaitForService to wait for kubelet
	I0910 17:30:36.688757   13777 kubeadm.go:582] duration metric: took 27.123268565s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:30:36.688785   13777 node_conditions.go:102] verifying NodePressure condition ...
	I0910 17:30:36.692318   13777 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 17:30:36.692341   13777 node_conditions.go:123] node cpu capacity is 2
	I0910 17:30:36.692353   13777 node_conditions.go:105] duration metric: took 3.562021ms to run NodePressure ...
	I0910 17:30:36.692364   13777 start.go:241] waiting for startup goroutines ...
	I0910 17:30:36.769013   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:36.805343   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:36.807812   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:37.125928   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:37.268408   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:37.307358   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:37.307370   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:37.626450   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:37.769104   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:37.807631   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:37.808032   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:38.410369   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:38.410675   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:38.410845   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:38.411724   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:38.626551   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:38.772173   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:38.813605   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:38.813975   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:39.126089   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:39.268594   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:39.306434   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:39.307212   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:39.627575   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:39.769119   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:39.806793   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:39.806955   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:40.126013   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:40.269594   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:40.307652   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:40.308116   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:40.626874   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:40.772237   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:40.809133   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:40.810841   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:41.126532   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:41.268653   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:41.310669   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:41.310958   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:41.638682   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:41.769185   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:41.805908   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:41.805996   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:42.125541   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:42.274727   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:42.314152   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:42.314527   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:42.625893   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:42.769480   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:42.805680   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:42.812721   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:43.125909   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:43.269084   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:43.306576   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:43.306976   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:43.715505   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:43.771618   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:43.805941   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:43.806723   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:44.124772   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:44.269280   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:44.306120   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:44.306950   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:44.625991   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:44.768665   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:44.805454   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:44.807495   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:45.126730   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:45.269364   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:45.306168   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:45.306714   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:45.631613   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:45.880383   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:45.883658   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:45.884726   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:46.127460   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:46.269296   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:46.306086   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:46.306509   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:46.625344   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:46.769098   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:46.806534   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:46.806996   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:47.124955   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:47.268498   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:47.306845   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:47.307880   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:47.626319   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:47.769012   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:47.806321   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:47.807436   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:48.125713   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:48.268906   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:48.306844   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:48.307565   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:48.626864   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:48.768630   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:48.805303   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:48.805947   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:49.131069   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:49.269163   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:49.305787   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:49.305910   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:49.625678   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:49.769604   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:49.809587   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:49.810440   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:50.125736   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:50.269191   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:50.306409   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:50.306739   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:50.625464   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:50.768892   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:50.805409   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:50.806243   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:51.125616   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:51.269034   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:51.306610   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:51.306959   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:51.625727   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:51.769169   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:51.806830   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:51.810306   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:52.125814   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:52.270051   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:52.306086   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:52.306192   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:52.626473   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:52.768916   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:52.806305   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:52.806665   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:53.125899   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:53.269024   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:53.305645   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:53.307059   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:53.627179   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:53.770551   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:53.806405   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:53.806674   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:54.126024   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:54.269166   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:54.371393   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:54.372173   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:54.625924   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:54.768277   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:54.806663   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:54.806832   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:55.125469   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:55.268594   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:55.305556   13777 kapi.go:107] duration metric: took 36.503445805s to wait for kubernetes.io/minikube-addons=registry ...
	I0910 17:30:55.313333   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:55.631573   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:55.768955   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:55.805802   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:56.125742   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:56.270140   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:56.305860   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:56.625644   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:56.769297   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:56.806369   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:57.127588   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:57.270814   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:57.305110   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:57.625709   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:57.768903   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:57.805501   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:58.126627   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:58.269044   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:58.305193   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:58.626293   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:58.768712   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:58.804911   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:59.125828   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:59.269468   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:59.306105   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:59.625637   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:59.769614   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:59.807183   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:00.127716   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:00.270273   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:00.306165   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:00.625737   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:00.768998   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:00.805477   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:01.125499   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:01.269176   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:01.306304   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:01.626469   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:01.768732   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:01.805496   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:02.127553   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:02.269284   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:02.305980   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:02.628890   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:02.768835   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:02.805753   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:03.126003   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:03.268927   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:03.306626   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:03.626444   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:03.768871   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:03.805456   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:04.125203   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:04.268865   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:04.306288   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:04.627855   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:04.769364   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:04.806388   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:05.127184   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:05.275177   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:05.381315   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:05.625844   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:05.769267   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:05.805825   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:06.126554   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:06.268758   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:06.306366   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:06.627171   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:06.770092   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:06.806226   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:07.126711   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:07.269048   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:07.306150   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:07.625655   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:07.768742   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:07.806033   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:08.126084   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:08.269282   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:08.305959   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:08.626832   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:08.769318   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:08.807491   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:09.126941   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:09.275226   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:09.308718   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:09.626407   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:09.769717   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:09.813779   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:10.125731   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:10.269355   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:10.309604   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:10.627981   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:10.770045   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:10.870554   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:11.128226   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:11.268520   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:11.308019   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:11.626140   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:11.769611   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:11.806272   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:12.126145   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:12.269471   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:12.306580   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:12.644024   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:12.770364   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:12.807268   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:13.127370   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:13.271524   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:13.306201   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:13.626164   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:13.768629   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:13.805319   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:14.126256   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:14.604140   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:14.604741   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:14.625880   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:14.769542   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:14.805015   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:15.129370   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:15.270705   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:15.306168   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:15.625569   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:15.769509   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:15.806404   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:16.127122   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:16.268486   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:16.306256   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:16.627609   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:16.768807   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:16.805284   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:17.126777   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:17.273904   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:17.306160   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:17.626219   13777 kapi.go:107] duration metric: took 58.005179225s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0910 17:31:17.769064   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:17.806337   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:18.269605   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:18.306821   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:18.768968   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:18.806084   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:19.269068   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:19.305883   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:19.768607   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:19.805388   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:20.269024   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:20.305384   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:20.770422   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:20.805852   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:21.268928   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:21.305819   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:21.770149   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:21.806244   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:22.268897   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:22.305737   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:22.769883   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:22.811948   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:23.269476   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:23.306255   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:23.770445   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:23.806935   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:24.268635   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:24.305750   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:24.768424   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:24.805735   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:25.269370   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:25.306913   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:25.770284   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:25.805807   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:26.269063   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:26.305656   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:26.769396   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:26.805876   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:27.268241   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:27.307415   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:27.771452   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:27.806295   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:28.290195   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:28.311170   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:28.771373   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:28.805752   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:29.269499   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:29.306013   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:29.769982   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:29.871116   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:30.268936   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:30.305384   13777 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:30.769209   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:30.806494   13777 kapi.go:107] duration metric: took 1m12.005153392s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0910 17:31:31.269701   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:31.769526   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:32.268540   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:32.771389   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:33.272123   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:33.769698   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:34.269894   13777 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:34.769472   13777 kapi.go:107] duration metric: took 1m13.504330818s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0910 17:31:34.770991   13777 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-306463 cluster.
	I0910 17:31:34.772225   13777 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0910 17:31:34.773540   13777 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0910 17:31:34.774682   13777 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, metrics-server, inspektor-gadget, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0910 17:31:34.775694   13777 addons.go:510] duration metric: took 1m25.210169317s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner metrics-server inspektor-gadget helm-tiller yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0910 17:31:34.775727   13777 start.go:246] waiting for cluster config update ...
	I0910 17:31:34.775743   13777 start.go:255] writing updated cluster config ...
	I0910 17:31:34.775953   13777 ssh_runner.go:195] Run: rm -f paused
	I0910 17:31:34.827173   13777 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 17:31:34.828957   13777 out.go:177] * Done! kubectl is now configured to use "addons-306463" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 10 17:44:57 addons-306463 crio[672]: time="2024-09-10 17:44:57.955235174Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9cb04067-85c8-471a-83df-305562f6b3a4 name=/runtime.v1.RuntimeService/Version
	Sep 10 17:44:57 addons-306463 crio[672]: time="2024-09-10 17:44:57.956132634Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1808a5e5-e7c9-4187-b0cc-2a292ac8e803 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:44:57 addons-306463 crio[672]: time="2024-09-10 17:44:57.957323337Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990297957294775,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1808a5e5-e7c9-4187-b0cc-2a292ac8e803 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:44:57 addons-306463 crio[672]: time="2024-09-10 17:44:57.957854034Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a3236fc-f139-4050-aecb-1275daa30696 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:44:57 addons-306463 crio[672]: time="2024-09-10 17:44:57.957984939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a3236fc-f139-4050-aecb-1275daa30696 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:44:57 addons-306463 crio[672]: time="2024-09-10 17:44:57.958244844Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e14037188ca65c0e588aaf5a8ac39857e019a8ac776ce4caae64d74a2e4b08e4,PodSandboxId:3cb988515a1408701ee5ded3dd0c31736095b04056337ad3ab85a206fe901aff,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725990192830046325,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-c627d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3367f866-b502-450a-b09b-d82059477fff,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45e5863717588980143d4e9ea227a1a055250dd646faf24d6fe1c739f4ef06e4,PodSandboxId:5b855cd6ea777fcbc69b131322f184d91c203f218051c1eff725089aef1a8895,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1725990054572609769,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7eaa2d0d-141b-494c-aa38-7e6697727bb4,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582aef687e6f17f3c2a84515f32a26653e60025be2de8edfede124d4524777cf,PodSandboxId:d0831ebcc1f1d7c65d9218dea5fa54b7beacaf1a8142f2630e7f0b73140fb9b1,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725989493343593602,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9cff5,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: c71f9bb4-5d5d-48be-b1a6-4d832400d952,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e0270fff871840c9a822cf5b0f515d34827f1d9fed151a0f2d5082fa11dd27b,PodSandboxId:bf5609ea0b023af2c716c42016528abbd115cf45498ea88cc431983aea60eb64,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725989440357118697,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6wcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc23d17-89f0-47a5-8880-0cf317f8a901,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2884c8e7918f3f66e43acc12fa41b52b90eaae0693407161bbabf4a79747bf,PodSandboxId:f3d0ecd016c61169dce7badc41eb39f61e6cb0d229dd82f9df2eaa408100403d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1725989416147478719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6196330e-c966-44c2-aedd-6dc5e570c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a215f27453dd39d7db9f39e41ec7e4ac2e49500ff6c716a9bfafc824954266b,PodSandboxId:a8d7383a3c4c8c7c4d4b4b82ad25e06f4d8654846ce6d88ed03d348a31c77acf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725989412
993871459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5qxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ce9784e-e567-4ff5-a7fc-cb8589c471c1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a73d39390d5ae35c72fc26b22cf8e0830151755c0757d4db86635b5c22b5975,PodSandboxId:8987d0bb394a57cd9a58f92728311e29e63b4b78f29249c4fc624a2e49afcc09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce56
0a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725989410338539077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js72f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97604350-aebe-4a6c-b687-0204de19c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f698d8d7966b01930ddbc6587bb08f04f7e2f8adad63b8a3e9887aed729b9495,PodSandboxId:3e898142a158834481c7a1d8ff69ecd76325a6c790734ed640797010ed8e7649,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725989399351056533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1009c91d9d6b512577ae300fa67a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2fd106868bc397fe25e2b6527f1b60b5821fe72928318b4f27a3432e9cea54,PodSandboxId:bff13732bced4bb24a37b7520e34f6c69fd42234a9343b0a1807cb796c72700a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725989399366385088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef6519980e55c7294622197c7f614a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a702e238565e00637134e10aceb4b0a000411fb06af5a78bdce0a089d6343b7b,PodSandboxId:bdfc49df82eed18f6ae46cecdb49805465a79a0b7da6ba3dc74ffb7bd3ee5038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725989399317066221,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa4afb4c8677b28249b35dd2b3e2495,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9820f2fa1dd2a81346b7b004efde785c85e92df60bded4a7237f9c20a0de805c,PodSandboxId:636a4a297aa530f5e18e30cd561b223da7ddada95440c3fce95b8b691604e464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725989399332095776,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a02b0c0abfab97cfeed0b549a823c12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a3236fc-f139-4050-aecb-1275daa30696 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:44:57 addons-306463 crio[672]: time="2024-09-10 17:44:57.974692854Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=19dcca0a-b430-42ab-a39e-80bdadd40366 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 10 17:44:57 addons-306463 crio[672]: time="2024-09-10 17:44:57.974992872Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3cb988515a1408701ee5ded3dd0c31736095b04056337ad3ab85a206fe901aff,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-c627d,Uid:3367f866-b502-450a-b09b-d82059477fff,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725990191916378388,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-c627d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3367f866-b502-450a-b09b-d82059477fff,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:43:11.603920859Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5b855cd6ea777fcbc69b131322f184d91c203f218051c1eff725089aef1a8895,Metadata:&PodSandboxMetadata{Name:nginx,Uid:7eaa2d0d-141b-494c-aa38-7e6697727bb4,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1725990052227867998,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7eaa2d0d-141b-494c-aa38-7e6697727bb4,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:40:51.914165753Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3d0a670d44a6654b13ef6179772139986549596d2d0ccae8ba6bb61d289cb7eb,Metadata:&PodSandboxMetadata{Name:busybox,Uid:a237baa2-0c28-439f-8fab-71565e2afef5,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989495415541473,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a237baa2-0c28-439f-8fab-71565e2afef5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:31:35.101543024Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d0831ebcc1f1d7c65d
9218dea5fa54b7beacaf1a8142f2630e7f0b73140fb9b1,Metadata:&PodSandboxMetadata{Name:gcp-auth-89d5ffd79-9cff5,Uid:c71f9bb4-5d5d-48be-b1a6-4d832400d952,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989488283918507,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9cff5,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c71f9bb4-5d5d-48be-b1a6-4d832400d952,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 89d5ffd79,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:30:21.199549340Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bf5609ea0b023af2c716c42016528abbd115cf45498ea88cc431983aea60eb64,Metadata:&PodSandboxMetadata{Name:metrics-server-84c5f94fbc-q6wcq,Uid:4dc23d17-89f0-47a5-8880-0cf317f8a901,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989415643168415,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-se
rver-84c5f94fbc-q6wcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc23d17-89f0-47a5-8880-0cf317f8a901,k8s-app: metrics-server,pod-template-hash: 84c5f94fbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:30:15.333362678Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f3d0ecd016c61169dce7badc41eb39f61e6cb0d229dd82f9df2eaa408100403d,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6196330e-c966-44c2-aedd-6dc5e570c6e5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989415225549647,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6196330e-c966-44c2-aedd-6dc5e570c6e5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"lab
els\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-10T17:30:14.606699429Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a8d7383a3c4c8c7c4d4b4b82ad25e06f4d8654846ce6d88ed03d348a31c77acf,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-c5qxp,Uid:5ce9784e-e567-4ff5-a7fc-cb8589c471c1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989410039298540,Labels:map[string]string{io.kubernetes.container.name: POD,io.kub
ernetes.pod.name: coredns-6f6b679f8f-c5qxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ce9784e-e567-4ff5-a7fc-cb8589c471c1,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:30:09.718625078Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8987d0bb394a57cd9a58f92728311e29e63b4b78f29249c4fc624a2e49afcc09,Metadata:&PodSandboxMetadata{Name:kube-proxy-js72f,Uid:97604350-aebe-4a6c-b687-0204de19c3f5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989409644803472,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-js72f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97604350-aebe-4a6c-b687-0204de19c3f5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:30:09.305387588Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodS
andbox{Id:3e898142a158834481c7a1d8ff69ecd76325a6c790734ed640797010ed8e7649,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-306463,Uid:1009c91d9d6b512577ae300fa67a4ebd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989399167615087,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1009c91d9d6b512577ae300fa67a4ebd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1009c91d9d6b512577ae300fa67a4ebd,kubernetes.io/config.seen: 2024-09-10T17:29:58.695991064Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bff13732bced4bb24a37b7520e34f6c69fd42234a9343b0a1807cb796c72700a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-306463,Uid:33ef6519980e55c7294622197c7f614a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989399167130195,Labels:map[string]string{componen
t: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef6519980e55c7294622197c7f614a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 33ef6519980e55c7294622197c7f614a,kubernetes.io/config.seen: 2024-09-10T17:29:58.695989801Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bdfc49df82eed18f6ae46cecdb49805465a79a0b7da6ba3dc74ffb7bd3ee5038,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-306463,Uid:bfa4afb4c8677b28249b35dd2b3e2495,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989399145060473,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa4afb4c8677b28249b35dd2b3e2495,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-ap
iserver.advertise-address.endpoint: 192.168.39.144:8443,kubernetes.io/config.hash: bfa4afb4c8677b28249b35dd2b3e2495,kubernetes.io/config.seen: 2024-09-10T17:29:58.695988647Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:636a4a297aa530f5e18e30cd561b223da7ddada95440c3fce95b8b691604e464,Metadata:&PodSandboxMetadata{Name:etcd-addons-306463,Uid:2a02b0c0abfab97cfeed0b549a823c12,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725989399144575099,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a02b0c0abfab97cfeed0b549a823c12,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.144:2379,kubernetes.io/config.hash: 2a02b0c0abfab97cfeed0b549a823c12,kubernetes.io/config.seen: 2024-09-10T17:29:58.695985659Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector
/interceptors.go:74" id=19dcca0a-b430-42ab-a39e-80bdadd40366 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 10 17:44:57 addons-306463 crio[672]: time="2024-09-10 17:44:57.975977596Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf05fb08-5011-4e68-bcc3-3e1a7e51fddc name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:44:57 addons-306463 crio[672]: time="2024-09-10 17:44:57.976083291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf05fb08-5011-4e68-bcc3-3e1a7e51fddc name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:44:57 addons-306463 crio[672]: time="2024-09-10 17:44:57.976314214Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e14037188ca65c0e588aaf5a8ac39857e019a8ac776ce4caae64d74a2e4b08e4,PodSandboxId:3cb988515a1408701ee5ded3dd0c31736095b04056337ad3ab85a206fe901aff,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725990192830046325,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-c627d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3367f866-b502-450a-b09b-d82059477fff,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45e5863717588980143d4e9ea227a1a055250dd646faf24d6fe1c739f4ef06e4,PodSandboxId:5b855cd6ea777fcbc69b131322f184d91c203f218051c1eff725089aef1a8895,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1725990054572609769,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7eaa2d0d-141b-494c-aa38-7e6697727bb4,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582aef687e6f17f3c2a84515f32a26653e60025be2de8edfede124d4524777cf,PodSandboxId:d0831ebcc1f1d7c65d9218dea5fa54b7beacaf1a8142f2630e7f0b73140fb9b1,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725989493343593602,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9cff5,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: c71f9bb4-5d5d-48be-b1a6-4d832400d952,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e0270fff871840c9a822cf5b0f515d34827f1d9fed151a0f2d5082fa11dd27b,PodSandboxId:bf5609ea0b023af2c716c42016528abbd115cf45498ea88cc431983aea60eb64,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725989440357118697,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6wcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc23d17-89f0-47a5-8880-0cf317f8a901,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2884c8e7918f3f66e43acc12fa41b52b90eaae0693407161bbabf4a79747bf,PodSandboxId:f3d0ecd016c61169dce7badc41eb39f61e6cb0d229dd82f9df2eaa408100403d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1725989416147478719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6196330e-c966-44c2-aedd-6dc5e570c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a215f27453dd39d7db9f39e41ec7e4ac2e49500ff6c716a9bfafc824954266b,PodSandboxId:a8d7383a3c4c8c7c4d4b4b82ad25e06f4d8654846ce6d88ed03d348a31c77acf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725989412
993871459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5qxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ce9784e-e567-4ff5-a7fc-cb8589c471c1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a73d39390d5ae35c72fc26b22cf8e0830151755c0757d4db86635b5c22b5975,PodSandboxId:8987d0bb394a57cd9a58f92728311e29e63b4b78f29249c4fc624a2e49afcc09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce56
0a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725989410338539077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js72f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97604350-aebe-4a6c-b687-0204de19c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f698d8d7966b01930ddbc6587bb08f04f7e2f8adad63b8a3e9887aed729b9495,PodSandboxId:3e898142a158834481c7a1d8ff69ecd76325a6c790734ed640797010ed8e7649,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725989399351056533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1009c91d9d6b512577ae300fa67a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2fd106868bc397fe25e2b6527f1b60b5821fe72928318b4f27a3432e9cea54,PodSandboxId:bff13732bced4bb24a37b7520e34f6c69fd42234a9343b0a1807cb796c72700a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725989399366385088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef6519980e55c7294622197c7f614a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a702e238565e00637134e10aceb4b0a000411fb06af5a78bdce0a089d6343b7b,PodSandboxId:bdfc49df82eed18f6ae46cecdb49805465a79a0b7da6ba3dc74ffb7bd3ee5038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725989399317066221,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa4afb4c8677b28249b35dd2b3e2495,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9820f2fa1dd2a81346b7b004efde785c85e92df60bded4a7237f9c20a0de805c,PodSandboxId:636a4a297aa530f5e18e30cd561b223da7ddada95440c3fce95b8b691604e464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725989399332095776,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a02b0c0abfab97cfeed0b549a823c12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf05fb08-5011-4e68-bcc3-3e1a7e51fddc name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:44:57 addons-306463 crio[672]: time="2024-09-10 17:44:57.993649821Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=286acc67-a6d0-4266-b995-c08d08efde14 name=/runtime.v1.RuntimeService/Version
	Sep 10 17:44:57 addons-306463 crio[672]: time="2024-09-10 17:44:57.993725108Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=286acc67-a6d0-4266-b995-c08d08efde14 name=/runtime.v1.RuntimeService/Version
	Sep 10 17:44:57 addons-306463 crio[672]: time="2024-09-10 17:44:57.995033872Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=689a0ad1-9fa4-4b9b-aaea-6f32e5e35d84 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:44:57 addons-306463 crio[672]: time="2024-09-10 17:44:57.996233183Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990297996211232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=689a0ad1-9fa4-4b9b-aaea-6f32e5e35d84 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:44:57 addons-306463 crio[672]: time="2024-09-10 17:44:57.997074998Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21f4b68e-ca9a-4ba7-a72a-1626c407203f name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:44:57 addons-306463 crio[672]: time="2024-09-10 17:44:57.997301629Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21f4b68e-ca9a-4ba7-a72a-1626c407203f name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:44:57 addons-306463 crio[672]: time="2024-09-10 17:44:57.997588444Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e14037188ca65c0e588aaf5a8ac39857e019a8ac776ce4caae64d74a2e4b08e4,PodSandboxId:3cb988515a1408701ee5ded3dd0c31736095b04056337ad3ab85a206fe901aff,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725990192830046325,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-c627d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3367f866-b502-450a-b09b-d82059477fff,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45e5863717588980143d4e9ea227a1a055250dd646faf24d6fe1c739f4ef06e4,PodSandboxId:5b855cd6ea777fcbc69b131322f184d91c203f218051c1eff725089aef1a8895,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1725990054572609769,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7eaa2d0d-141b-494c-aa38-7e6697727bb4,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582aef687e6f17f3c2a84515f32a26653e60025be2de8edfede124d4524777cf,PodSandboxId:d0831ebcc1f1d7c65d9218dea5fa54b7beacaf1a8142f2630e7f0b73140fb9b1,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725989493343593602,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9cff5,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: c71f9bb4-5d5d-48be-b1a6-4d832400d952,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e0270fff871840c9a822cf5b0f515d34827f1d9fed151a0f2d5082fa11dd27b,PodSandboxId:bf5609ea0b023af2c716c42016528abbd115cf45498ea88cc431983aea60eb64,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725989440357118697,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6wcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc23d17-89f0-47a5-8880-0cf317f8a901,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2884c8e7918f3f66e43acc12fa41b52b90eaae0693407161bbabf4a79747bf,PodSandboxId:f3d0ecd016c61169dce7badc41eb39f61e6cb0d229dd82f9df2eaa408100403d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1725989416147478719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6196330e-c966-44c2-aedd-6dc5e570c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a215f27453dd39d7db9f39e41ec7e4ac2e49500ff6c716a9bfafc824954266b,PodSandboxId:a8d7383a3c4c8c7c4d4b4b82ad25e06f4d8654846ce6d88ed03d348a31c77acf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725989412
993871459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5qxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ce9784e-e567-4ff5-a7fc-cb8589c471c1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a73d39390d5ae35c72fc26b22cf8e0830151755c0757d4db86635b5c22b5975,PodSandboxId:8987d0bb394a57cd9a58f92728311e29e63b4b78f29249c4fc624a2e49afcc09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce56
0a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725989410338539077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js72f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97604350-aebe-4a6c-b687-0204de19c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f698d8d7966b01930ddbc6587bb08f04f7e2f8adad63b8a3e9887aed729b9495,PodSandboxId:3e898142a158834481c7a1d8ff69ecd76325a6c790734ed640797010ed8e7649,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725989399351056533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1009c91d9d6b512577ae300fa67a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2fd106868bc397fe25e2b6527f1b60b5821fe72928318b4f27a3432e9cea54,PodSandboxId:bff13732bced4bb24a37b7520e34f6c69fd42234a9343b0a1807cb796c72700a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725989399366385088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef6519980e55c7294622197c7f614a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a702e238565e00637134e10aceb4b0a000411fb06af5a78bdce0a089d6343b7b,PodSandboxId:bdfc49df82eed18f6ae46cecdb49805465a79a0b7da6ba3dc74ffb7bd3ee5038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725989399317066221,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa4afb4c8677b28249b35dd2b3e2495,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9820f2fa1dd2a81346b7b004efde785c85e92df60bded4a7237f9c20a0de805c,PodSandboxId:636a4a297aa530f5e18e30cd561b223da7ddada95440c3fce95b8b691604e464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725989399332095776,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a02b0c0abfab97cfeed0b549a823c12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21f4b68e-ca9a-4ba7-a72a-1626c407203f name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:44:58 addons-306463 crio[672]: time="2024-09-10 17:44:58.038842598Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0eae5197-a6f5-4b6f-970e-68f1d06a0ef3 name=/runtime.v1.RuntimeService/Version
	Sep 10 17:44:58 addons-306463 crio[672]: time="2024-09-10 17:44:58.038972973Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0eae5197-a6f5-4b6f-970e-68f1d06a0ef3 name=/runtime.v1.RuntimeService/Version
	Sep 10 17:44:58 addons-306463 crio[672]: time="2024-09-10 17:44:58.040091540Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b2ebcf0-4967-4c45-957b-4bb753b7f086 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:44:58 addons-306463 crio[672]: time="2024-09-10 17:44:58.041418522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990298041395176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b2ebcf0-4967-4c45-957b-4bb753b7f086 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:44:58 addons-306463 crio[672]: time="2024-09-10 17:44:58.042104421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a8e70b6-553c-4ecb-a1e4-862e197b6864 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:44:58 addons-306463 crio[672]: time="2024-09-10 17:44:58.042177220Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a8e70b6-553c-4ecb-a1e4-862e197b6864 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:44:58 addons-306463 crio[672]: time="2024-09-10 17:44:58.042411417Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e14037188ca65c0e588aaf5a8ac39857e019a8ac776ce4caae64d74a2e4b08e4,PodSandboxId:3cb988515a1408701ee5ded3dd0c31736095b04056337ad3ab85a206fe901aff,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725990192830046325,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-c627d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3367f866-b502-450a-b09b-d82059477fff,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45e5863717588980143d4e9ea227a1a055250dd646faf24d6fe1c739f4ef06e4,PodSandboxId:5b855cd6ea777fcbc69b131322f184d91c203f218051c1eff725089aef1a8895,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1725990054572609769,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7eaa2d0d-141b-494c-aa38-7e6697727bb4,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582aef687e6f17f3c2a84515f32a26653e60025be2de8edfede124d4524777cf,PodSandboxId:d0831ebcc1f1d7c65d9218dea5fa54b7beacaf1a8142f2630e7f0b73140fb9b1,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725989493343593602,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9cff5,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: c71f9bb4-5d5d-48be-b1a6-4d832400d952,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e0270fff871840c9a822cf5b0f515d34827f1d9fed151a0f2d5082fa11dd27b,PodSandboxId:bf5609ea0b023af2c716c42016528abbd115cf45498ea88cc431983aea60eb64,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725989440357118697,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6wcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc23d17-89f0-47a5-8880-0cf317f8a901,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2884c8e7918f3f66e43acc12fa41b52b90eaae0693407161bbabf4a79747bf,PodSandboxId:f3d0ecd016c61169dce7badc41eb39f61e6cb0d229dd82f9df2eaa408100403d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1725989416147478719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6196330e-c966-44c2-aedd-6dc5e570c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a215f27453dd39d7db9f39e41ec7e4ac2e49500ff6c716a9bfafc824954266b,PodSandboxId:a8d7383a3c4c8c7c4d4b4b82ad25e06f4d8654846ce6d88ed03d348a31c77acf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725989412
993871459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5qxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ce9784e-e567-4ff5-a7fc-cb8589c471c1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a73d39390d5ae35c72fc26b22cf8e0830151755c0757d4db86635b5c22b5975,PodSandboxId:8987d0bb394a57cd9a58f92728311e29e63b4b78f29249c4fc624a2e49afcc09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce56
0a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725989410338539077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js72f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97604350-aebe-4a6c-b687-0204de19c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f698d8d7966b01930ddbc6587bb08f04f7e2f8adad63b8a3e9887aed729b9495,PodSandboxId:3e898142a158834481c7a1d8ff69ecd76325a6c790734ed640797010ed8e7649,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725989399351056533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1009c91d9d6b512577ae300fa67a4ebd,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2fd106868bc397fe25e2b6527f1b60b5821fe72928318b4f27a3432e9cea54,PodSandboxId:bff13732bced4bb24a37b7520e34f6c69fd42234a9343b0a1807cb796c72700a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725989399366385088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef6519980e55c7294622197c7f614a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a702e238565e00637134e10aceb4b0a000411fb06af5a78bdce0a089d6343b7b,PodSandboxId:bdfc49df82eed18f6ae46cecdb49805465a79a0b7da6ba3dc74ffb7bd3ee5038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725989399317066221,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa4afb4c8677b28249b35dd2b3e2495,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9820f2fa1dd2a81346b7b004efde785c85e92df60bded4a7237f9c20a0de805c,PodSandboxId:636a4a297aa530f5e18e30cd561b223da7ddada95440c3fce95b8b691604e464,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725989399332095776,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-306463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a02b0c0abfab97cfeed0b549a823c12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a8e70b6-553c-4ecb-a1e4-862e197b6864 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e14037188ca65       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   About a minute ago   Running             hello-world-app           0                   3cb988515a140       hello-world-app-55bf9c44b4-c627d
	45e5863717588       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         4 minutes ago        Running             nginx                     0                   5b855cd6ea777       nginx
	582aef687e6f1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            13 minutes ago       Running             gcp-auth                  0                   d0831ebcc1f1d       gcp-auth-89d5ffd79-9cff5
	9e0270fff8718       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   14 minutes ago       Running             metrics-server            0                   bf5609ea0b023       metrics-server-84c5f94fbc-q6wcq
	bc2884c8e7918       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        14 minutes ago       Running             storage-provisioner       0                   f3d0ecd016c61       storage-provisioner
	0a215f27453dd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        14 minutes ago       Running             coredns                   0                   a8d7383a3c4c8       coredns-6f6b679f8f-c5qxp
	3a73d39390d5a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        14 minutes ago       Running             kube-proxy                0                   8987d0bb394a5       kube-proxy-js72f
	1b2fd106868bc       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        14 minutes ago       Running             kube-controller-manager   0                   bff13732bced4       kube-controller-manager-addons-306463
	f698d8d7966b0       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        14 minutes ago       Running             kube-scheduler            0                   3e898142a1588       kube-scheduler-addons-306463
	9820f2fa1dd2a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        14 minutes ago       Running             etcd                      0                   636a4a297aa53       etcd-addons-306463
	a702e238565e0       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        14 minutes ago       Running             kube-apiserver            0                   bdfc49df82eed       kube-apiserver-addons-306463
	
	
	==> coredns [0a215f27453dd39d7db9f39e41ec7e4ac2e49500ff6c716a9bfafc824954266b] <==
	[INFO] 127.0.0.1:46294 - 34342 "HINFO IN 2988755105619345519.8178505747039127944. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010883316s
	[INFO] 10.244.0.7:51528 - 9833 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000522483s
	[INFO] 10.244.0.7:51528 - 52590 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000287548s
	[INFO] 10.244.0.7:49547 - 11105 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000080302s
	[INFO] 10.244.0.7:49547 - 54119 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038918s
	[INFO] 10.244.0.7:51045 - 63866 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000058098s
	[INFO] 10.244.0.7:51045 - 57464 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00006068s
	[INFO] 10.244.0.7:48884 - 49406 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000072191s
	[INFO] 10.244.0.7:48884 - 18943 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000044584s
	[INFO] 10.244.0.7:48605 - 63647 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000058049s
	[INFO] 10.244.0.7:48605 - 26013 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00010894s
	[INFO] 10.244.0.7:53898 - 7835 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000034622s
	[INFO] 10.244.0.7:53898 - 30617 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00003803s
	[INFO] 10.244.0.7:41577 - 5082 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072855s
	[INFO] 10.244.0.7:41577 - 14808 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000127251s
	[INFO] 10.244.0.7:35153 - 44630 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000117348s
	[INFO] 10.244.0.7:35153 - 21591 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000061476s
	[INFO] 10.244.0.22:53652 - 52736 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000525847s
	[INFO] 10.244.0.22:51909 - 33747 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000080647s
	[INFO] 10.244.0.22:59992 - 15038 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000160421s
	[INFO] 10.244.0.22:50214 - 27016 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000071597s
	[INFO] 10.244.0.22:58245 - 14301 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127195s
	[INFO] 10.244.0.22:46404 - 10714 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000079794s
	[INFO] 10.244.0.22:37437 - 16123 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001244875s
	[INFO] 10.244.0.22:55509 - 30140 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001661686s
	
	
	==> describe nodes <==
	Name:               addons-306463
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-306463
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=addons-306463
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T17_30_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-306463
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:30:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-306463
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 17:44:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 17:43:40 +0000   Tue, 10 Sep 2024 17:30:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 17:43:40 +0000   Tue, 10 Sep 2024 17:30:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 17:43:40 +0000   Tue, 10 Sep 2024 17:30:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 17:43:40 +0000   Tue, 10 Sep 2024 17:30:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.144
	  Hostname:    addons-306463
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 dd3fd5b0d8a84e1595be7f0c7913d0fd
	  System UUID:                dd3fd5b0-d8a8-4e15-95be-7f0c7913d0fd
	  Boot ID:                    41ce101e-c89c-4773-988f-9e0f2e4ee815
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-c627d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  gcp-auth                    gcp-auth-89d5ffd79-9cff5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-6f6b679f8f-c5qxp                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     14m
	  kube-system                 etcd-addons-306463                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         14m
	  kube-system                 kube-apiserver-addons-306463             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-306463    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-js72f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-306463             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node addons-306463 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node addons-306463 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node addons-306463 status is now: NodeHasSufficientPID
	  Normal  NodeReady                14m   kubelet          Node addons-306463 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node addons-306463 event: Registered Node addons-306463 in Controller
	
	
	==> dmesg <==
	[  +5.181145] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.627457] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.626250] kauditd_printk_skb: 2 callbacks suppressed
	[Sep10 17:31] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.055474] kauditd_printk_skb: 66 callbacks suppressed
	[  +5.112841] kauditd_printk_skb: 31 callbacks suppressed
	[ +13.239386] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.062688] kauditd_printk_skb: 49 callbacks suppressed
	[  +9.206322] kauditd_printk_skb: 9 callbacks suppressed
	[Sep10 17:32] kauditd_printk_skb: 28 callbacks suppressed
	[Sep10 17:34] kauditd_printk_skb: 28 callbacks suppressed
	[Sep10 17:37] kauditd_printk_skb: 28 callbacks suppressed
	[Sep10 17:39] kauditd_printk_skb: 28 callbacks suppressed
	[  +9.622351] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.013540] kauditd_printk_skb: 39 callbacks suppressed
	[Sep10 17:40] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.890154] kauditd_printk_skb: 20 callbacks suppressed
	[ +15.600296] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.244502] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.735054] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.942009] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.692638] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.461030] kauditd_printk_skb: 36 callbacks suppressed
	[Sep10 17:43] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.497989] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [9820f2fa1dd2a81346b7b004efde785c85e92df60bded4a7237f9c20a0de805c] <==
	{"level":"warn","ts":"2024-09-10T17:30:45.866761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.415869ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:30:45.866856Z","caller":"traceutil/trace.go:171","msg":"trace[1068047252] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:927; }","duration":"110.525419ms","start":"2024-09-10T17:30:45.756319Z","end":"2024-09-10T17:30:45.866845Z","steps":["trace[1068047252] 'range keys from in-memory index tree'  (duration: 110.294ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:30:45.867044Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.860548ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-10T17:30:45.867096Z","caller":"traceutil/trace.go:171","msg":"trace[1809753364] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:927; }","duration":"102.917852ms","start":"2024-09-10T17:30:45.764169Z","end":"2024-09-10T17:30:45.867087Z","steps":["trace[1809753364] 'range keys from in-memory index tree'  (duration: 102.76827ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:30:52.059608Z","caller":"traceutil/trace.go:171","msg":"trace[1298428410] transaction","detail":"{read_only:false; response_revision:937; number_of_response:1; }","duration":"112.30505ms","start":"2024-09-10T17:30:51.947283Z","end":"2024-09-10T17:30:52.059589Z","steps":["trace[1298428410] 'process raft request'  (duration: 112.162762ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:30:53.611772Z","caller":"traceutil/trace.go:171","msg":"trace[1258985527] linearizableReadLoop","detail":"{readStateIndex:964; appliedIndex:963; }","duration":"178.702934ms","start":"2024-09-10T17:30:53.433055Z","end":"2024-09-10T17:30:53.611758Z","steps":["trace[1258985527] 'read index received'  (duration: 178.578616ms)","trace[1258985527] 'applied index is now lower than readState.Index'  (duration: 123.822µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-10T17:30:53.611866Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.792771ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:30:53.611943Z","caller":"traceutil/trace.go:171","msg":"trace[1456752792] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:938; }","duration":"178.885873ms","start":"2024-09-10T17:30:53.433052Z","end":"2024-09-10T17:30:53.611937Z","steps":["trace[1456752792] 'agreement among raft nodes before linearized reading'  (duration: 178.78055ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:31:14.585857Z","caller":"traceutil/trace.go:171","msg":"trace[1736615180] linearizableReadLoop","detail":"{readStateIndex:1118; appliedIndex:1117; }","duration":"331.383713ms","start":"2024-09-10T17:31:14.254456Z","end":"2024-09-10T17:31:14.585840Z","steps":["trace[1736615180] 'read index received'  (duration: 331.171762ms)","trace[1736615180] 'applied index is now lower than readState.Index'  (duration: 211.53µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-10T17:31:14.585995Z","caller":"traceutil/trace.go:171","msg":"trace[616425486] transaction","detail":"{read_only:false; response_revision:1087; number_of_response:1; }","duration":"377.322054ms","start":"2024-09-10T17:31:14.208667Z","end":"2024-09-10T17:31:14.585989Z","steps":["trace[616425486] 'process raft request'  (duration: 377.062724ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:31:14.586082Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T17:31:14.208652Z","time spent":"377.361583ms","remote":"127.0.0.1:39804","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1074 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-10T17:31:14.586243Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.149848ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:31:14.586321Z","caller":"traceutil/trace.go:171","msg":"trace[1902664727] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1087; }","duration":"295.232349ms","start":"2024-09-10T17:31:14.291079Z","end":"2024-09-10T17:31:14.586312Z","steps":["trace[1902664727] 'agreement among raft nodes before linearized reading'  (duration: 295.125426ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:31:14.586354Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.675068ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:31:14.586388Z","caller":"traceutil/trace.go:171","msg":"trace[681843452] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1087; }","duration":"153.706065ms","start":"2024-09-10T17:31:14.432677Z","end":"2024-09-10T17:31:14.586383Z","steps":["trace[681843452] 'agreement among raft nodes before linearized reading'  (duration: 153.67064ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:31:14.586334Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"331.898535ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:31:14.587217Z","caller":"traceutil/trace.go:171","msg":"trace[59462955] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1087; }","duration":"332.778889ms","start":"2024-09-10T17:31:14.254428Z","end":"2024-09-10T17:31:14.587207Z","steps":["trace[59462955] 'agreement among raft nodes before linearized reading'  (duration: 331.885636ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:31:14.587550Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T17:31:14.254397Z","time spent":"333.142093ms","remote":"127.0.0.1:39820","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-09-10T17:31:25.693826Z","caller":"traceutil/trace.go:171","msg":"trace[916338974] transaction","detail":"{read_only:false; response_revision:1113; number_of_response:1; }","duration":"175.709853ms","start":"2024-09-10T17:31:25.518097Z","end":"2024-09-10T17:31:25.693806Z","steps":["trace[916338974] 'process raft request'  (duration: 175.242522ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:31:28.273694Z","caller":"traceutil/trace.go:171","msg":"trace[326156197] transaction","detail":"{read_only:false; response_revision:1118; number_of_response:1; }","duration":"145.50673ms","start":"2024-09-10T17:31:28.128165Z","end":"2024-09-10T17:31:28.273671Z","steps":["trace[326156197] 'process raft request'  (duration: 145.173512ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:31:33.252659Z","caller":"traceutil/trace.go:171","msg":"trace[803236101] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"168.076485ms","start":"2024-09-10T17:31:33.084566Z","end":"2024-09-10T17:31:33.252643Z","steps":["trace[803236101] 'process raft request'  (duration: 167.526703ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:39:54.247760Z","caller":"traceutil/trace.go:171","msg":"trace[1408959823] transaction","detail":"{read_only:false; response_revision:2000; number_of_response:1; }","duration":"120.176741ms","start":"2024-09-10T17:39:54.127561Z","end":"2024-09-10T17:39:54.247737Z","steps":["trace[1408959823] 'process raft request'  (duration: 120.058138ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:40:00.350559Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1527}
	{"level":"info","ts":"2024-09-10T17:40:00.394567Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1527,"took":"43.479842ms","hash":4077854701,"current-db-size-bytes":6705152,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":3575808,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-10T17:40:00.394619Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4077854701,"revision":1527,"compact-revision":-1}
	
	
	==> gcp-auth [582aef687e6f17f3c2a84515f32a26653e60025be2de8edfede124d4524777cf] <==
	2024/09/10 17:31:35 Ready to write response ...
	2024/09/10 17:39:48 Ready to marshal response ...
	2024/09/10 17:39:48 Ready to write response ...
	2024/09/10 17:39:48 Ready to marshal response ...
	2024/09/10 17:39:48 Ready to write response ...
	2024/09/10 17:39:54 Ready to marshal response ...
	2024/09/10 17:39:54 Ready to write response ...
	2024/09/10 17:39:59 Ready to marshal response ...
	2024/09/10 17:39:59 Ready to write response ...
	2024/09/10 17:39:59 Ready to marshal response ...
	2024/09/10 17:39:59 Ready to write response ...
	2024/09/10 17:40:08 Ready to marshal response ...
	2024/09/10 17:40:08 Ready to write response ...
	2024/09/10 17:40:19 Ready to marshal response ...
	2024/09/10 17:40:19 Ready to write response ...
	2024/09/10 17:40:51 Ready to marshal response ...
	2024/09/10 17:40:51 Ready to write response ...
	2024/09/10 17:40:55 Ready to marshal response ...
	2024/09/10 17:40:55 Ready to write response ...
	2024/09/10 17:40:55 Ready to marshal response ...
	2024/09/10 17:40:55 Ready to write response ...
	2024/09/10 17:40:55 Ready to marshal response ...
	2024/09/10 17:40:55 Ready to write response ...
	2024/09/10 17:43:11 Ready to marshal response ...
	2024/09/10 17:43:11 Ready to write response ...
	
	
	==> kernel <==
	 17:44:58 up 15 min,  0 users,  load average: 0.25, 0.42, 0.41
	Linux addons-306463 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a702e238565e00637134e10aceb4b0a000411fb06af5a78bdce0a089d6343b7b] <==
	W0910 17:39:45.189265       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0910 17:40:02.002508       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0910 17:40:09.586498       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0910 17:40:09.594344       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0910 17:40:09.601249       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0910 17:40:24.601459       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0910 17:40:34.932458       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:40:34.932521       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:40:34.975423       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:40:34.975477       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:40:34.992396       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:40:34.992451       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:40:35.118983       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:40:35.119164       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0910 17:40:36.120579       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0910 17:40:36.126608       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0910 17:40:37.403724       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0910 17:40:38.410312       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0910 17:40:51.772832       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0910 17:40:51.949953       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.46.35"}
	I0910 17:40:55.053150       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.16.137"}
	I0910 17:43:11.755281       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.191.186"}
	E0910 17:43:14.307458       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0910 17:43:16.969424       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0910 17:43:16.975329       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [1b2fd106868bc397fe25e2b6527f1b60b5821fe72928318b4f27a3432e9cea54] <==
	W0910 17:43:17.234131       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:43:17.234259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:43:22.813715       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:43:22.813851       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:43:24.292669       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0910 17:43:26.259238       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:43:26.259300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:43:40.242479       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-306463"
	W0910 17:43:44.182428       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:43:44.182595       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:43:56.565637       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:43:56.565986       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:43:59.307501       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:43:59.307557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:44:04.409411       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:44:04.409470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:44:39.526296       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:44:39.526468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:44:43.017745       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:44:43.017834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:44:53.921451       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:44:53.921509       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:44:54.032772       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:44:54.032810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:44:57.029438       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="9.579µs"
	
	
	==> kube-proxy [3a73d39390d5ae35c72fc26b22cf8e0830151755c0757d4db86635b5c22b5975] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 17:30:10.959254       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 17:30:10.977328       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.144"]
	E0910 17:30:10.977427       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 17:30:11.055345       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 17:30:11.055408       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 17:30:11.055442       1 server_linux.go:169] "Using iptables Proxier"
	I0910 17:30:11.058990       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 17:30:11.059418       1 server.go:483] "Version info" version="v1.31.0"
	I0910 17:30:11.059455       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 17:30:11.061020       1 config.go:197] "Starting service config controller"
	I0910 17:30:11.061045       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 17:30:11.061068       1 config.go:104] "Starting endpoint slice config controller"
	I0910 17:30:11.061072       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 17:30:11.061523       1 config.go:326] "Starting node config controller"
	I0910 17:30:11.061530       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 17:30:11.161679       1 shared_informer.go:320] Caches are synced for node config
	I0910 17:30:11.161709       1 shared_informer.go:320] Caches are synced for service config
	I0910 17:30:11.161736       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f698d8d7966b01930ddbc6587bb08f04f7e2f8adad63b8a3e9887aed729b9495] <==
	W0910 17:30:01.806509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 17:30:01.806539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:01.806590       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 17:30:01.806622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:01.806670       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 17:30:01.806700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:01.806755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0910 17:30:01.806784       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:01.810146       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0910 17:30:01.811967       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0910 17:30:02.656866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 17:30:02.656998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:02.852652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 17:30:02.852741       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:02.914536       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 17:30:02.914590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:02.973206       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 17:30:02.973257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:03.010457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 17:30:03.010597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:03.040102       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0910 17:30:03.040268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:03.048988       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0910 17:30:03.049072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0910 17:30:03.383329       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 17:44:04 addons-306463 kubelet[1220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 17:44:04 addons-306463 kubelet[1220]: E0910 17:44:04.473620    1220 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990244473210214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:44:04 addons-306463 kubelet[1220]: E0910 17:44:04.473643    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990244473210214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:44:05 addons-306463 kubelet[1220]: I0910 17:44:05.729230    1220 scope.go:117] "RemoveContainer" containerID="b0dcc0b067c1fe33cab5925ccf93236b0b4235680f7460bc114b84a50691c3a5"
	Sep 10 17:44:05 addons-306463 kubelet[1220]: I0910 17:44:05.751838    1220 scope.go:117] "RemoveContainer" containerID="4919ba67c923ab8d43533c369b87ef4ced592fbe0c0deb116fbbb857ebf533ae"
	Sep 10 17:44:11 addons-306463 kubelet[1220]: E0910 17:44:11.121185    1220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="a237baa2-0c28-439f-8fab-71565e2afef5"
	Sep 10 17:44:14 addons-306463 kubelet[1220]: E0910 17:44:14.476132    1220 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990254475718201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:44:14 addons-306463 kubelet[1220]: E0910 17:44:14.476170    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990254475718201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:44:22 addons-306463 kubelet[1220]: E0910 17:44:22.122437    1220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="a237baa2-0c28-439f-8fab-71565e2afef5"
	Sep 10 17:44:24 addons-306463 kubelet[1220]: E0910 17:44:24.478669    1220 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990264478316595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:44:24 addons-306463 kubelet[1220]: E0910 17:44:24.478709    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990264478316595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:44:34 addons-306463 kubelet[1220]: E0910 17:44:34.484397    1220 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990274482224655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:44:34 addons-306463 kubelet[1220]: E0910 17:44:34.484701    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990274482224655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:44:37 addons-306463 kubelet[1220]: E0910 17:44:37.121179    1220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="a237baa2-0c28-439f-8fab-71565e2afef5"
	Sep 10 17:44:44 addons-306463 kubelet[1220]: E0910 17:44:44.487263    1220 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990284486739042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:44:44 addons-306463 kubelet[1220]: E0910 17:44:44.487566    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990284486739042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:44:49 addons-306463 kubelet[1220]: E0910 17:44:49.121502    1220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="a237baa2-0c28-439f-8fab-71565e2afef5"
	Sep 10 17:44:54 addons-306463 kubelet[1220]: E0910 17:44:54.490451    1220 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990294489932839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:44:54 addons-306463 kubelet[1220]: E0910 17:44:54.490735    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990294489932839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579739,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:44:58 addons-306463 kubelet[1220]: I0910 17:44:58.404505    1220 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4dc23d17-89f0-47a5-8880-0cf317f8a901-tmp-dir\") pod \"4dc23d17-89f0-47a5-8880-0cf317f8a901\" (UID: \"4dc23d17-89f0-47a5-8880-0cf317f8a901\") "
	Sep 10 17:44:58 addons-306463 kubelet[1220]: I0910 17:44:58.404578    1220 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7pnm\" (UniqueName: \"kubernetes.io/projected/4dc23d17-89f0-47a5-8880-0cf317f8a901-kube-api-access-x7pnm\") pod \"4dc23d17-89f0-47a5-8880-0cf317f8a901\" (UID: \"4dc23d17-89f0-47a5-8880-0cf317f8a901\") "
	Sep 10 17:44:58 addons-306463 kubelet[1220]: I0910 17:44:58.405380    1220 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dc23d17-89f0-47a5-8880-0cf317f8a901-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "4dc23d17-89f0-47a5-8880-0cf317f8a901" (UID: "4dc23d17-89f0-47a5-8880-0cf317f8a901"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 10 17:44:58 addons-306463 kubelet[1220]: I0910 17:44:58.415419    1220 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dc23d17-89f0-47a5-8880-0cf317f8a901-kube-api-access-x7pnm" (OuterVolumeSpecName: "kube-api-access-x7pnm") pod "4dc23d17-89f0-47a5-8880-0cf317f8a901" (UID: "4dc23d17-89f0-47a5-8880-0cf317f8a901"). InnerVolumeSpecName "kube-api-access-x7pnm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:44:58 addons-306463 kubelet[1220]: I0910 17:44:58.505123    1220 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-x7pnm\" (UniqueName: \"kubernetes.io/projected/4dc23d17-89f0-47a5-8880-0cf317f8a901-kube-api-access-x7pnm\") on node \"addons-306463\" DevicePath \"\""
	Sep 10 17:44:58 addons-306463 kubelet[1220]: I0910 17:44:58.505154    1220 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4dc23d17-89f0-47a5-8880-0cf317f8a901-tmp-dir\") on node \"addons-306463\" DevicePath \"\""
	
	
	==> storage-provisioner [bc2884c8e7918f3f66e43acc12fa41b52b90eaae0693407161bbabf4a79747bf] <==
	I0910 17:30:16.804855       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 17:30:16.824584       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 17:30:16.824662       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 17:30:16.842816       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 17:30:16.866442       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-306463_b8efc0b6-d765-4a34-ad76-23431d9ccb05!
	I0910 17:30:16.866012       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"337c439a-f46b-493b-9e06-ad4421b197f3", APIVersion:"v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-306463_b8efc0b6-d765-4a34-ad76-23431d9ccb05 became leader
	I0910 17:30:16.971090       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-306463_b8efc0b6-d765-4a34-ad76-23431d9ccb05!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-306463 -n addons-306463
helpers_test.go:261: (dbg) Run:  kubectl --context addons-306463 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-306463 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-306463 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-306463/192.168.39.144
	Start Time:       Tue, 10 Sep 2024 17:31:35 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7msjq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7msjq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/busybox to addons-306463
	  Normal   Pulling    11m (x4 over 13m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     11m (x4 over 13m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     11m (x4 over 13m)     kubelet            Error: ErrImagePull
	  Warning  Failed     11m (x6 over 13m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m12s (x43 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (321.64s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 node stop m02 -v=7 --alsologtostderr
E0910 17:54:01.670294   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:54:06.792609   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:54:17.033981   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:54:37.515311   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:55:18.477243   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-558946 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.456704855s)

                                                
                                                
-- stdout --
	* Stopping node "ha-558946-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 17:53:59.891962   28441 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:53:59.892220   28441 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:53:59.892229   28441 out.go:358] Setting ErrFile to fd 2...
	I0910 17:53:59.892233   28441 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:53:59.892430   28441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:53:59.892680   28441 mustload.go:65] Loading cluster: ha-558946
	I0910 17:53:59.893106   28441 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:53:59.893124   28441 stop.go:39] StopHost: ha-558946-m02
	I0910 17:53:59.893552   28441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:53:59.893595   28441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:53:59.909969   28441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39935
	I0910 17:53:59.910384   28441 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:53:59.910928   28441 main.go:141] libmachine: Using API Version  1
	I0910 17:53:59.910948   28441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:53:59.911320   28441 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:53:59.913679   28441 out.go:177] * Stopping node "ha-558946-m02"  ...
	I0910 17:53:59.914998   28441 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0910 17:53:59.915036   28441 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:53:59.915266   28441 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0910 17:53:59.915294   28441 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:53:59.918028   28441 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:53:59.918439   28441 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:53:59.918470   28441 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:53:59.918606   28441 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:53:59.918775   28441 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:53:59.918906   28441 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:53:59.919030   28441 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa Username:docker}
	I0910 17:54:00.008984   28441 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0910 17:54:00.063101   28441 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0910 17:54:00.118163   28441 main.go:141] libmachine: Stopping "ha-558946-m02"...
	I0910 17:54:00.118187   28441 main.go:141] libmachine: (ha-558946-m02) Calling .GetState
	I0910 17:54:00.120070   28441 main.go:141] libmachine: (ha-558946-m02) Calling .Stop
	I0910 17:54:00.123604   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 0/120
	I0910 17:54:01.124772   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 1/120
	I0910 17:54:02.126088   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 2/120
	I0910 17:54:03.128017   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 3/120
	I0910 17:54:04.129137   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 4/120
	I0910 17:54:05.130269   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 5/120
	I0910 17:54:06.131604   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 6/120
	I0910 17:54:07.132792   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 7/120
	I0910 17:54:08.134063   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 8/120
	I0910 17:54:09.135714   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 9/120
	I0910 17:54:10.138032   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 10/120
	I0910 17:54:11.139633   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 11/120
	I0910 17:54:12.140857   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 12/120
	I0910 17:54:13.142100   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 13/120
	I0910 17:54:14.143545   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 14/120
	I0910 17:54:15.145331   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 15/120
	I0910 17:54:16.146757   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 16/120
	I0910 17:54:17.148582   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 17/120
	I0910 17:54:18.149938   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 18/120
	I0910 17:54:19.151271   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 19/120
	I0910 17:54:20.153506   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 20/120
	I0910 17:54:21.155617   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 21/120
	I0910 17:54:22.156953   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 22/120
	I0910 17:54:23.158400   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 23/120
	I0910 17:54:24.160343   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 24/120
	I0910 17:54:25.162127   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 25/120
	I0910 17:54:26.163590   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 26/120
	I0910 17:54:27.164982   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 27/120
	I0910 17:54:28.166449   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 28/120
	I0910 17:54:29.168083   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 29/120
	I0910 17:54:30.170199   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 30/120
	I0910 17:54:31.171617   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 31/120
	I0910 17:54:32.172836   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 32/120
	I0910 17:54:33.174072   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 33/120
	I0910 17:54:34.175358   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 34/120
	I0910 17:54:35.177266   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 35/120
	I0910 17:54:36.179552   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 36/120
	I0910 17:54:37.180703   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 37/120
	I0910 17:54:38.182050   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 38/120
	I0910 17:54:39.183597   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 39/120
	I0910 17:54:40.185413   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 40/120
	I0910 17:54:41.187377   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 41/120
	I0910 17:54:42.188620   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 42/120
	I0910 17:54:43.189926   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 43/120
	I0910 17:54:44.192020   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 44/120
	I0910 17:54:45.193883   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 45/120
	I0910 17:54:46.195168   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 46/120
	I0910 17:54:47.196325   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 47/120
	I0910 17:54:48.197718   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 48/120
	I0910 17:54:49.199652   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 49/120
	I0910 17:54:50.201876   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 50/120
	I0910 17:54:51.203595   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 51/120
	I0910 17:54:52.204857   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 52/120
	I0910 17:54:53.206200   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 53/120
	I0910 17:54:54.207453   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 54/120
	I0910 17:54:55.209453   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 55/120
	I0910 17:54:56.211844   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 56/120
	I0910 17:54:57.213144   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 57/120
	I0910 17:54:58.214370   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 58/120
	I0910 17:54:59.215793   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 59/120
	I0910 17:55:00.217761   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 60/120
	I0910 17:55:01.219364   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 61/120
	I0910 17:55:02.220683   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 62/120
	I0910 17:55:03.222015   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 63/120
	I0910 17:55:04.223411   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 64/120
	I0910 17:55:05.225360   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 65/120
	I0910 17:55:06.227358   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 66/120
	I0910 17:55:07.228663   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 67/120
	I0910 17:55:08.229842   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 68/120
	I0910 17:55:09.230963   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 69/120
	I0910 17:55:10.232846   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 70/120
	I0910 17:55:11.234153   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 71/120
	I0910 17:55:12.235241   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 72/120
	I0910 17:55:13.236548   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 73/120
	I0910 17:55:14.237813   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 74/120
	I0910 17:55:15.239671   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 75/120
	I0910 17:55:16.240934   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 76/120
	I0910 17:55:17.242465   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 77/120
	I0910 17:55:18.243725   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 78/120
	I0910 17:55:19.245119   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 79/120
	I0910 17:55:20.247007   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 80/120
	I0910 17:55:21.248374   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 81/120
	I0910 17:55:22.249549   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 82/120
	I0910 17:55:23.251539   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 83/120
	I0910 17:55:24.252805   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 84/120
	I0910 17:55:25.254697   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 85/120
	I0910 17:55:26.256930   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 86/120
	I0910 17:55:27.258324   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 87/120
	I0910 17:55:28.259767   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 88/120
	I0910 17:55:29.261248   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 89/120
	I0910 17:55:30.263230   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 90/120
	I0910 17:55:31.264814   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 91/120
	I0910 17:55:32.266131   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 92/120
	I0910 17:55:33.268312   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 93/120
	I0910 17:55:34.269964   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 94/120
	I0910 17:55:35.271626   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 95/120
	I0910 17:55:36.272875   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 96/120
	I0910 17:55:37.275031   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 97/120
	I0910 17:55:38.276484   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 98/120
	I0910 17:55:39.277687   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 99/120
	I0910 17:55:40.279374   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 100/120
	I0910 17:55:41.280533   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 101/120
	I0910 17:55:42.281853   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 102/120
	I0910 17:55:43.283258   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 103/120
	I0910 17:55:44.284645   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 104/120
	I0910 17:55:45.285988   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 105/120
	I0910 17:55:46.287404   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 106/120
	I0910 17:55:47.288621   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 107/120
	I0910 17:55:48.290016   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 108/120
	I0910 17:55:49.291338   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 109/120
	I0910 17:55:50.293429   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 110/120
	I0910 17:55:51.295443   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 111/120
	I0910 17:55:52.296626   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 112/120
	I0910 17:55:53.297770   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 113/120
	I0910 17:55:54.299049   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 114/120
	I0910 17:55:55.301000   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 115/120
	I0910 17:55:56.302173   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 116/120
	I0910 17:55:57.303346   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 117/120
	I0910 17:55:58.305447   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 118/120
	I0910 17:55:59.307385   28441 main.go:141] libmachine: (ha-558946-m02) Waiting for machine to stop 119/120
	I0910 17:56:00.308649   28441 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0910 17:56:00.308805   28441 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-558946 node stop m02 -v=7 --alsologtostderr": exit status 30
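
The stderr block above shows the shape of the failure: the kvm2 driver accepted the Stop call, then polled the VM state once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") and, with the guest still Running after the last attempt, reported "unable to stop vm, current state \"Running\"". The following is a minimal Go sketch of that poll-until-stopped-or-timeout pattern; the Machine interface and the error value are hypothetical illustrations drawn from the log, not minikube's actual driver API.

    // Generic sketch of the stop-and-poll pattern the stderr above implies: ask the
    // driver to stop the VM, then check its state once per second for a fixed number
    // of attempts (120 in the log) before giving up. The Machine interface and the
    // error value are hypothetical, not minikube's actual kvm2 API.
    package vmstop

    import (
        "errors"
        "fmt"
        "time"
    )

    // Machine is a hypothetical stand-in for the libmachine driver handle.
    type Machine interface {
        Stop() error
        State() (string, error)
    }

    // ErrStillRunning mirrors the failure reported above once all attempts are used.
    var ErrStillRunning = errors.New(`unable to stop vm, current state "Running"`)

    func stopWithTimeout(m Machine, attempts int) error {
        if err := m.Stop(); err != nil {
            return err
        }
        for i := 0; i < attempts; i++ {
            fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
            if state, err := m.State(); err == nil && state != "Running" {
                return nil
            }
            time.Sleep(time.Second)
        }
        return ErrStillRunning
    }

With one-second polling and 120 attempts, a guest that never reaches a stopped state accounts for the roughly two-minute wall time (2m0.45s) recorded for the node stop command above.
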
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr: exit status 3 (19.221407801s)

                                                
                                                
-- stdout --
	ha-558946
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558946-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-558946-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558946-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 17:56:00.351835   28861 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:56:00.352060   28861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:56:00.352068   28861 out.go:358] Setting ErrFile to fd 2...
	I0910 17:56:00.352072   28861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:56:00.352261   28861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:56:00.352429   28861 out.go:352] Setting JSON to false
	I0910 17:56:00.352454   28861 mustload.go:65] Loading cluster: ha-558946
	I0910 17:56:00.352568   28861 notify.go:220] Checking for updates...
	I0910 17:56:00.352803   28861 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:56:00.352816   28861 status.go:255] checking status of ha-558946 ...
	I0910 17:56:00.353212   28861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:00.353278   28861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:00.373288   28861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32897
	I0910 17:56:00.373698   28861 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:00.374229   28861 main.go:141] libmachine: Using API Version  1
	I0910 17:56:00.374264   28861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:00.374638   28861 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:00.374838   28861 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:56:00.376492   28861 status.go:330] ha-558946 host status = "Running" (err=<nil>)
	I0910 17:56:00.376507   28861 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:56:00.376784   28861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:00.376818   28861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:00.391558   28861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35147
	I0910 17:56:00.391957   28861 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:00.392384   28861 main.go:141] libmachine: Using API Version  1
	I0910 17:56:00.392405   28861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:00.392707   28861 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:00.392859   28861 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 17:56:00.395707   28861 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:00.396133   28861 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:56:00.396169   28861 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:00.396316   28861 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:56:00.396684   28861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:00.396733   28861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:00.412344   28861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34487
	I0910 17:56:00.412821   28861 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:00.413327   28861 main.go:141] libmachine: Using API Version  1
	I0910 17:56:00.413344   28861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:00.413786   28861 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:00.413966   28861 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:56:00.414143   28861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:00.414176   28861 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:56:00.416962   28861 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:00.417387   28861 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:56:00.417417   28861 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:00.417581   28861 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:56:00.417726   28861 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:56:00.417860   28861 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:56:00.418004   28861 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:56:00.502793   28861 ssh_runner.go:195] Run: systemctl --version
	I0910 17:56:00.510577   28861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:00.526358   28861 kubeconfig.go:125] found "ha-558946" server: "https://192.168.39.254:8443"
	I0910 17:56:00.526387   28861 api_server.go:166] Checking apiserver status ...
	I0910 17:56:00.526419   28861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:56:00.541758   28861 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1143/cgroup
	W0910 17:56:00.550865   28861 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1143/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 17:56:00.550908   28861 ssh_runner.go:195] Run: ls
	I0910 17:56:00.555086   28861 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 17:56:00.560893   28861 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 17:56:00.560915   28861 status.go:422] ha-558946 apiserver status = Running (err=<nil>)
	I0910 17:56:00.560927   28861 status.go:257] ha-558946 status: &{Name:ha-558946 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:56:00.560951   28861 status.go:255] checking status of ha-558946-m02 ...
	I0910 17:56:00.561309   28861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:00.561362   28861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:00.576434   28861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45553
	I0910 17:56:00.576809   28861 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:00.577197   28861 main.go:141] libmachine: Using API Version  1
	I0910 17:56:00.577211   28861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:00.577479   28861 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:00.577639   28861 main.go:141] libmachine: (ha-558946-m02) Calling .GetState
	I0910 17:56:00.579139   28861 status.go:330] ha-558946-m02 host status = "Running" (err=<nil>)
	I0910 17:56:00.579155   28861 host.go:66] Checking if "ha-558946-m02" exists ...
	I0910 17:56:00.579436   28861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:00.579464   28861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:00.593741   28861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0910 17:56:00.594083   28861 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:00.594461   28861 main.go:141] libmachine: Using API Version  1
	I0910 17:56:00.594477   28861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:00.594742   28861 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:00.594895   28861 main.go:141] libmachine: (ha-558946-m02) Calling .GetIP
	I0910 17:56:00.597246   28861 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:00.597610   28861 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:56:00.597636   28861 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:00.597722   28861 host.go:66] Checking if "ha-558946-m02" exists ...
	I0910 17:56:00.598102   28861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:00.598139   28861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:00.612362   28861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35487
	I0910 17:56:00.612785   28861 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:00.613272   28861 main.go:141] libmachine: Using API Version  1
	I0910 17:56:00.613290   28861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:00.613569   28861 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:00.613744   28861 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:56:00.613908   28861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:00.613927   28861 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:56:00.616511   28861 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:00.616870   28861 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:56:00.616907   28861 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:00.617044   28861 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:56:00.617229   28861 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:56:00.617393   28861 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:56:00.617550   28861 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa Username:docker}
	W0910 17:56:19.169354   28861 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.96:22: connect: no route to host
	W0910 17:56:19.169458   28861 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	E0910 17:56:19.169483   28861 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	I0910 17:56:19.169519   28861 status.go:257] ha-558946-m02 status: &{Name:ha-558946-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0910 17:56:19.169537   28861 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	I0910 17:56:19.169547   28861 status.go:255] checking status of ha-558946-m03 ...
	I0910 17:56:19.169856   28861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:19.169894   28861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:19.184531   28861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44775
	I0910 17:56:19.184920   28861 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:19.185526   28861 main.go:141] libmachine: Using API Version  1
	I0910 17:56:19.185554   28861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:19.185943   28861 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:19.186152   28861 main.go:141] libmachine: (ha-558946-m03) Calling .GetState
	I0910 17:56:19.187977   28861 status.go:330] ha-558946-m03 host status = "Running" (err=<nil>)
	I0910 17:56:19.187997   28861 host.go:66] Checking if "ha-558946-m03" exists ...
	I0910 17:56:19.188324   28861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:19.188393   28861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:19.202949   28861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40751
	I0910 17:56:19.203490   28861 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:19.203960   28861 main.go:141] libmachine: Using API Version  1
	I0910 17:56:19.203980   28861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:19.204313   28861 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:19.204476   28861 main.go:141] libmachine: (ha-558946-m03) Calling .GetIP
	I0910 17:56:19.207072   28861 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:19.207524   28861 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:56:19.207555   28861 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:19.207749   28861 host.go:66] Checking if "ha-558946-m03" exists ...
	I0910 17:56:19.208116   28861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:19.208156   28861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:19.222503   28861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33911
	I0910 17:56:19.222820   28861 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:19.223227   28861 main.go:141] libmachine: Using API Version  1
	I0910 17:56:19.223264   28861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:19.223554   28861 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:19.223740   28861 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:56:19.223914   28861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:19.223937   28861 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:56:19.226642   28861 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:19.227072   28861 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:56:19.227108   28861 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:19.227235   28861 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:56:19.227378   28861 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:56:19.227503   28861 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:56:19.227638   28861 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa Username:docker}
	I0910 17:56:19.315263   28861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:19.332159   28861 kubeconfig.go:125] found "ha-558946" server: "https://192.168.39.254:8443"
	I0910 17:56:19.332187   28861 api_server.go:166] Checking apiserver status ...
	I0910 17:56:19.332218   28861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:56:19.346585   28861 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W0910 17:56:19.356495   28861 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 17:56:19.356543   28861 ssh_runner.go:195] Run: ls
	I0910 17:56:19.361360   28861 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 17:56:19.367721   28861 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 17:56:19.367741   28861 status.go:422] ha-558946-m03 apiserver status = Running (err=<nil>)
	I0910 17:56:19.367751   28861 status.go:257] ha-558946-m03 status: &{Name:ha-558946-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:56:19.367769   28861 status.go:255] checking status of ha-558946-m04 ...
	I0910 17:56:19.368056   28861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:19.368096   28861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:19.382659   28861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37359
	I0910 17:56:19.383072   28861 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:19.383540   28861 main.go:141] libmachine: Using API Version  1
	I0910 17:56:19.383561   28861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:19.383853   28861 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:19.384022   28861 main.go:141] libmachine: (ha-558946-m04) Calling .GetState
	I0910 17:56:19.385388   28861 status.go:330] ha-558946-m04 host status = "Running" (err=<nil>)
	I0910 17:56:19.385400   28861 host.go:66] Checking if "ha-558946-m04" exists ...
	I0910 17:56:19.385659   28861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:19.385689   28861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:19.399444   28861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42933
	I0910 17:56:19.399778   28861 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:19.400174   28861 main.go:141] libmachine: Using API Version  1
	I0910 17:56:19.400187   28861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:19.400444   28861 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:19.400597   28861 main.go:141] libmachine: (ha-558946-m04) Calling .GetIP
	I0910 17:56:19.403293   28861 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:19.403713   28861 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:53:09 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 17:56:19.403736   28861 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:19.403872   28861 host.go:66] Checking if "ha-558946-m04" exists ...
	I0910 17:56:19.404256   28861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:19.404311   28861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:19.417872   28861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34405
	I0910 17:56:19.418268   28861 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:19.418803   28861 main.go:141] libmachine: Using API Version  1
	I0910 17:56:19.418848   28861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:19.419118   28861 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:19.419300   28861 main.go:141] libmachine: (ha-558946-m04) Calling .DriverName
	I0910 17:56:19.419471   28861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:19.419497   28861 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHHostname
	I0910 17:56:19.421912   28861 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:19.422285   28861 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:53:09 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 17:56:19.422315   28861 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:19.422450   28861 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHPort
	I0910 17:56:19.422613   28861 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHKeyPath
	I0910 17:56:19.422747   28861 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHUsername
	I0910 17:56:19.422878   28861 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m04/id_rsa Username:docker}
	I0910 17:56:19.513806   28861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:19.531196   28861 status.go:257] ha-558946-m04 status: &{Name:ha-558946-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr" : exit status 3
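
In the status output above, the m02 entry (host: Error, kubelet and apiserver: Nonexistent) follows directly from the SSH dial failure in the stderr block: libvirt still reports the domain as Running, but the disk, kubelet, and apiserver probes all require an SSH session, and "no route to host" stops them before they run. The sketch below illustrates that ordering under hypothetical types; it is not minikube's actual status.go implementation.

    // Minimal sketch (not minikube's actual status.go) of why ha-558946-m02 is
    // reported as Host:Error with Kubelet and APIServer Nonexistent above.
    package nodestatus

    // Runner is a hypothetical stand-in for minikube's ssh_runner.
    type Runner interface {
        Run(cmd string) error
    }

    // NodeStatus mirrors the per-node fields printed by "minikube status -v=7".
    type NodeStatus struct {
        Host      string
        Kubelet   string
        APIServer string
    }

    func checkNode(vmState string, ssh Runner) NodeStatus {
        st := NodeStatus{Host: vmState, Kubelet: "Nonexistent", APIServer: "Nonexistent"}
        if vmState != "Running" {
            return st
        }
        // First SSH-backed probe; if it cannot even dial (as for m02 above), the
        // node is marked Host:Error and the remaining checks are skipped.
        if err := ssh.Run(`df -h /var | awk 'NR==2{print $5}'`); err != nil {
            st.Host = "Error"
            return st
        }
        if err := ssh.Run("sudo systemctl is-active --quiet service kubelet"); err == nil {
            st.Kubelet = "Running"
        }
        // The apiserver probe (an HTTPS GET of /healthz on the VIP in the log) is
        // omitted to keep the sketch short.
        return st
    }
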
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-558946 -n ha-558946
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-558946 logs -n 25: (1.354576421s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-558946 cp ha-558946-m03:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1374294018/001/cp-test_ha-558946-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m03:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946:/home/docker/cp-test_ha-558946-m03_ha-558946.txt                       |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946 sudo cat                                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m03_ha-558946.txt                                 |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m03:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m02:/home/docker/cp-test_ha-558946-m03_ha-558946-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946-m02 sudo cat                                          | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m03_ha-558946-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m03:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04:/home/docker/cp-test_ha-558946-m03_ha-558946-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946-m04 sudo cat                                          | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m03_ha-558946-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-558946 cp testdata/cp-test.txt                                                | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1374294018/001/cp-test_ha-558946-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946:/home/docker/cp-test_ha-558946-m04_ha-558946.txt                       |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946 sudo cat                                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m04_ha-558946.txt                                 |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m02:/home/docker/cp-test_ha-558946-m04_ha-558946-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946-m02 sudo cat                                          | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m04_ha-558946-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m03:/home/docker/cp-test_ha-558946-m04_ha-558946-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946-m03 sudo cat                                          | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m04_ha-558946-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-558946 node stop m02 -v=7                                                     | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:49:39
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:49:39.086967   24502 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:49:39.087076   24502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:49:39.087088   24502 out.go:358] Setting ErrFile to fd 2...
	I0910 17:49:39.087093   24502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:49:39.087295   24502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:49:39.087922   24502 out.go:352] Setting JSON to false
	I0910 17:49:39.088839   24502 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1931,"bootTime":1725988648,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:49:39.088900   24502 start.go:139] virtualization: kvm guest
	I0910 17:49:39.090775   24502 out.go:177] * [ha-558946] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 17:49:39.091795   24502 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:49:39.091834   24502 notify.go:220] Checking for updates...
	I0910 17:49:39.093979   24502 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:49:39.095078   24502 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:49:39.096084   24502 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:49:39.097036   24502 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 17:49:39.098065   24502 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:49:39.099338   24502 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:49:39.132527   24502 out.go:177] * Using the kvm2 driver based on user configuration
	I0910 17:49:39.133697   24502 start.go:297] selected driver: kvm2
	I0910 17:49:39.133707   24502 start.go:901] validating driver "kvm2" against <nil>
	I0910 17:49:39.133716   24502 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:49:39.134329   24502 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:49:39.134391   24502 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 17:49:39.148496   24502 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 17:49:39.148548   24502 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 17:49:39.148733   24502 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:49:39.148762   24502 cni.go:84] Creating CNI manager for ""
	I0910 17:49:39.148768   24502 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0910 17:49:39.148775   24502 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0910 17:49:39.148813   24502 start.go:340] cluster config:
	{Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:49:39.148892   24502 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:49:39.150381   24502 out.go:177] * Starting "ha-558946" primary control-plane node in "ha-558946" cluster
	I0910 17:49:39.151311   24502 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:49:39.151349   24502 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0910 17:49:39.151357   24502 cache.go:56] Caching tarball of preloaded images
	I0910 17:49:39.151422   24502 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 17:49:39.151432   24502 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 17:49:39.151708   24502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:49:39.151728   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json: {Name:mkfc34283f0a4aac201e0c3ede39cbef107c60af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:49:39.151850   24502 start.go:360] acquireMachinesLock for ha-558946: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 17:49:39.151876   24502 start.go:364] duration metric: took 14.944µs to acquireMachinesLock for "ha-558946"
	I0910 17:49:39.151892   24502 start.go:93] Provisioning new machine with config: &{Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:49:39.151937   24502 start.go:125] createHost starting for "" (driver="kvm2")
	I0910 17:49:39.154101   24502 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 17:49:39.154205   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:49:39.154246   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:49:39.167763   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37027
	I0910 17:49:39.168154   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:49:39.168659   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:49:39.168682   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:49:39.168967   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:49:39.169149   24502 main.go:141] libmachine: (ha-558946) Calling .GetMachineName
	I0910 17:49:39.169300   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:49:39.169443   24502 start.go:159] libmachine.API.Create for "ha-558946" (driver="kvm2")
	I0910 17:49:39.169469   24502 client.go:168] LocalClient.Create starting
	I0910 17:49:39.169498   24502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem
	I0910 17:49:39.169532   24502 main.go:141] libmachine: Decoding PEM data...
	I0910 17:49:39.169548   24502 main.go:141] libmachine: Parsing certificate...
	I0910 17:49:39.169615   24502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem
	I0910 17:49:39.169640   24502 main.go:141] libmachine: Decoding PEM data...
	I0910 17:49:39.169656   24502 main.go:141] libmachine: Parsing certificate...
	I0910 17:49:39.169689   24502 main.go:141] libmachine: Running pre-create checks...
	I0910 17:49:39.169715   24502 main.go:141] libmachine: (ha-558946) Calling .PreCreateCheck
	I0910 17:49:39.170013   24502 main.go:141] libmachine: (ha-558946) Calling .GetConfigRaw
	I0910 17:49:39.170340   24502 main.go:141] libmachine: Creating machine...
	I0910 17:49:39.170352   24502 main.go:141] libmachine: (ha-558946) Calling .Create
	I0910 17:49:39.170460   24502 main.go:141] libmachine: (ha-558946) Creating KVM machine...
	I0910 17:49:39.171642   24502 main.go:141] libmachine: (ha-558946) DBG | found existing default KVM network
	I0910 17:49:39.172283   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:39.172172   24525 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0910 17:49:39.172333   24502 main.go:141] libmachine: (ha-558946) DBG | created network xml: 
	I0910 17:49:39.172352   24502 main.go:141] libmachine: (ha-558946) DBG | <network>
	I0910 17:49:39.172374   24502 main.go:141] libmachine: (ha-558946) DBG |   <name>mk-ha-558946</name>
	I0910 17:49:39.172387   24502 main.go:141] libmachine: (ha-558946) DBG |   <dns enable='no'/>
	I0910 17:49:39.172397   24502 main.go:141] libmachine: (ha-558946) DBG |   
	I0910 17:49:39.172408   24502 main.go:141] libmachine: (ha-558946) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0910 17:49:39.172417   24502 main.go:141] libmachine: (ha-558946) DBG |     <dhcp>
	I0910 17:49:39.172433   24502 main.go:141] libmachine: (ha-558946) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0910 17:49:39.172444   24502 main.go:141] libmachine: (ha-558946) DBG |     </dhcp>
	I0910 17:49:39.172454   24502 main.go:141] libmachine: (ha-558946) DBG |   </ip>
	I0910 17:49:39.172462   24502 main.go:141] libmachine: (ha-558946) DBG |   
	I0910 17:49:39.172471   24502 main.go:141] libmachine: (ha-558946) DBG | </network>
	I0910 17:49:39.172477   24502 main.go:141] libmachine: (ha-558946) DBG | 
	I0910 17:49:39.176861   24502 main.go:141] libmachine: (ha-558946) DBG | trying to create private KVM network mk-ha-558946 192.168.39.0/24...
	I0910 17:49:39.239779   24502 main.go:141] libmachine: (ha-558946) DBG | private KVM network mk-ha-558946 192.168.39.0/24 created
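	The lines above show minikube generating a libvirt network definition and creating the private network mk-ha-558946 (192.168.39.0/24). As a minimal sketch, assuming virsh is available on the host that ran this job, the created network can be inspected out-of-band:
	
		# list libvirt networks and dump the one minikube just created
		virsh net-list --all
		virsh net-dumpxml mk-ha-558946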
	I0910 17:49:39.239809   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:39.239740   24525 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:49:39.239837   24502 main.go:141] libmachine: (ha-558946) Setting up store path in /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946 ...
	I0910 17:49:39.239857   24502 main.go:141] libmachine: (ha-558946) Building disk image from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 17:49:39.239871   24502 main.go:141] libmachine: (ha-558946) Downloading /home/jenkins/minikube-integration/19598-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 17:49:39.479765   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:39.479649   24525 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa...
	I0910 17:49:39.643695   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:39.643588   24525 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/ha-558946.rawdisk...
	I0910 17:49:39.643718   24502 main.go:141] libmachine: (ha-558946) DBG | Writing magic tar header
	I0910 17:49:39.643731   24502 main.go:141] libmachine: (ha-558946) DBG | Writing SSH key tar header
	I0910 17:49:39.643742   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:39.643695   24525 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946 ...
	I0910 17:49:39.643824   24502 main.go:141] libmachine: (ha-558946) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946
	I0910 17:49:39.643862   24502 main.go:141] libmachine: (ha-558946) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946 (perms=drwx------)
	I0910 17:49:39.643873   24502 main.go:141] libmachine: (ha-558946) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines
	I0910 17:49:39.643888   24502 main.go:141] libmachine: (ha-558946) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:49:39.643902   24502 main.go:141] libmachine: (ha-558946) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973
	I0910 17:49:39.643912   24502 main.go:141] libmachine: (ha-558946) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0910 17:49:39.643924   24502 main.go:141] libmachine: (ha-558946) DBG | Checking permissions on dir: /home/jenkins
	I0910 17:49:39.643934   24502 main.go:141] libmachine: (ha-558946) DBG | Checking permissions on dir: /home
	I0910 17:49:39.643945   24502 main.go:141] libmachine: (ha-558946) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines (perms=drwxr-xr-x)
	I0910 17:49:39.643956   24502 main.go:141] libmachine: (ha-558946) DBG | Skipping /home - not owner
	I0910 17:49:39.643994   24502 main.go:141] libmachine: (ha-558946) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube (perms=drwxr-xr-x)
	I0910 17:49:39.644020   24502 main.go:141] libmachine: (ha-558946) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973 (perms=drwxrwxr-x)
	I0910 17:49:39.644029   24502 main.go:141] libmachine: (ha-558946) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0910 17:49:39.644042   24502 main.go:141] libmachine: (ha-558946) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0910 17:49:39.644056   24502 main.go:141] libmachine: (ha-558946) Creating domain...
	I0910 17:49:39.644855   24502 main.go:141] libmachine: (ha-558946) define libvirt domain using xml: 
	I0910 17:49:39.644875   24502 main.go:141] libmachine: (ha-558946) <domain type='kvm'>
	I0910 17:49:39.644884   24502 main.go:141] libmachine: (ha-558946)   <name>ha-558946</name>
	I0910 17:49:39.644899   24502 main.go:141] libmachine: (ha-558946)   <memory unit='MiB'>2200</memory>
	I0910 17:49:39.644911   24502 main.go:141] libmachine: (ha-558946)   <vcpu>2</vcpu>
	I0910 17:49:39.644921   24502 main.go:141] libmachine: (ha-558946)   <features>
	I0910 17:49:39.644930   24502 main.go:141] libmachine: (ha-558946)     <acpi/>
	I0910 17:49:39.644941   24502 main.go:141] libmachine: (ha-558946)     <apic/>
	I0910 17:49:39.644948   24502 main.go:141] libmachine: (ha-558946)     <pae/>
	I0910 17:49:39.644958   24502 main.go:141] libmachine: (ha-558946)     
	I0910 17:49:39.644965   24502 main.go:141] libmachine: (ha-558946)   </features>
	I0910 17:49:39.644981   24502 main.go:141] libmachine: (ha-558946)   <cpu mode='host-passthrough'>
	I0910 17:49:39.645005   24502 main.go:141] libmachine: (ha-558946)   
	I0910 17:49:39.645024   24502 main.go:141] libmachine: (ha-558946)   </cpu>
	I0910 17:49:39.645044   24502 main.go:141] libmachine: (ha-558946)   <os>
	I0910 17:49:39.645060   24502 main.go:141] libmachine: (ha-558946)     <type>hvm</type>
	I0910 17:49:39.645093   24502 main.go:141] libmachine: (ha-558946)     <boot dev='cdrom'/>
	I0910 17:49:39.645108   24502 main.go:141] libmachine: (ha-558946)     <boot dev='hd'/>
	I0910 17:49:39.645121   24502 main.go:141] libmachine: (ha-558946)     <bootmenu enable='no'/>
	I0910 17:49:39.645130   24502 main.go:141] libmachine: (ha-558946)   </os>
	I0910 17:49:39.645141   24502 main.go:141] libmachine: (ha-558946)   <devices>
	I0910 17:49:39.645152   24502 main.go:141] libmachine: (ha-558946)     <disk type='file' device='cdrom'>
	I0910 17:49:39.645167   24502 main.go:141] libmachine: (ha-558946)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/boot2docker.iso'/>
	I0910 17:49:39.645175   24502 main.go:141] libmachine: (ha-558946)       <target dev='hdc' bus='scsi'/>
	I0910 17:49:39.645199   24502 main.go:141] libmachine: (ha-558946)       <readonly/>
	I0910 17:49:39.645220   24502 main.go:141] libmachine: (ha-558946)     </disk>
	I0910 17:49:39.645234   24502 main.go:141] libmachine: (ha-558946)     <disk type='file' device='disk'>
	I0910 17:49:39.645245   24502 main.go:141] libmachine: (ha-558946)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0910 17:49:39.645258   24502 main.go:141] libmachine: (ha-558946)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/ha-558946.rawdisk'/>
	I0910 17:49:39.645272   24502 main.go:141] libmachine: (ha-558946)       <target dev='hda' bus='virtio'/>
	I0910 17:49:39.645285   24502 main.go:141] libmachine: (ha-558946)     </disk>
	I0910 17:49:39.645301   24502 main.go:141] libmachine: (ha-558946)     <interface type='network'>
	I0910 17:49:39.645324   24502 main.go:141] libmachine: (ha-558946)       <source network='mk-ha-558946'/>
	I0910 17:49:39.645344   24502 main.go:141] libmachine: (ha-558946)       <model type='virtio'/>
	I0910 17:49:39.645355   24502 main.go:141] libmachine: (ha-558946)     </interface>
	I0910 17:49:39.645370   24502 main.go:141] libmachine: (ha-558946)     <interface type='network'>
	I0910 17:49:39.645399   24502 main.go:141] libmachine: (ha-558946)       <source network='default'/>
	I0910 17:49:39.645422   24502 main.go:141] libmachine: (ha-558946)       <model type='virtio'/>
	I0910 17:49:39.645436   24502 main.go:141] libmachine: (ha-558946)     </interface>
	I0910 17:49:39.645447   24502 main.go:141] libmachine: (ha-558946)     <serial type='pty'>
	I0910 17:49:39.645457   24502 main.go:141] libmachine: (ha-558946)       <target port='0'/>
	I0910 17:49:39.645479   24502 main.go:141] libmachine: (ha-558946)     </serial>
	I0910 17:49:39.645496   24502 main.go:141] libmachine: (ha-558946)     <console type='pty'>
	I0910 17:49:39.645506   24502 main.go:141] libmachine: (ha-558946)       <target type='serial' port='0'/>
	I0910 17:49:39.645528   24502 main.go:141] libmachine: (ha-558946)     </console>
	I0910 17:49:39.645543   24502 main.go:141] libmachine: (ha-558946)     <rng model='virtio'>
	I0910 17:49:39.645556   24502 main.go:141] libmachine: (ha-558946)       <backend model='random'>/dev/random</backend>
	I0910 17:49:39.645566   24502 main.go:141] libmachine: (ha-558946)     </rng>
	I0910 17:49:39.645577   24502 main.go:141] libmachine: (ha-558946)     
	I0910 17:49:39.645587   24502 main.go:141] libmachine: (ha-558946)     
	I0910 17:49:39.645599   24502 main.go:141] libmachine: (ha-558946)   </devices>
	I0910 17:49:39.645610   24502 main.go:141] libmachine: (ha-558946) </domain>
	I0910 17:49:39.645622   24502 main.go:141] libmachine: (ha-558946) 
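	Once the domain XML above is defined, libvirt holds the authoritative copy; a rough check, assuming virsh on the same host, is to dump the stored definition and its attached interfaces:
	
		# show the domain definition as libvirt stored it, plus its NICs
		virsh dumpxml ha-558946
		virsh domiflist ha-558946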
	I0910 17:49:39.649700   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:4b:55:87 in network default
	I0910 17:49:39.650271   24502 main.go:141] libmachine: (ha-558946) Ensuring networks are active...
	I0910 17:49:39.650287   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:39.650919   24502 main.go:141] libmachine: (ha-558946) Ensuring network default is active
	I0910 17:49:39.651172   24502 main.go:141] libmachine: (ha-558946) Ensuring network mk-ha-558946 is active
	I0910 17:49:39.651721   24502 main.go:141] libmachine: (ha-558946) Getting domain xml...
	I0910 17:49:39.652420   24502 main.go:141] libmachine: (ha-558946) Creating domain...
	I0910 17:49:40.822021   24502 main.go:141] libmachine: (ha-558946) Waiting to get IP...
	I0910 17:49:40.822641   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:40.822977   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:40.822997   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:40.822958   24525 retry.go:31] will retry after 296.730328ms: waiting for machine to come up
	I0910 17:49:41.121296   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:41.121685   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:41.121714   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:41.121652   24525 retry.go:31] will retry after 247.649187ms: waiting for machine to come up
	I0910 17:49:41.371076   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:41.371451   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:41.371482   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:41.371402   24525 retry.go:31] will retry after 367.998904ms: waiting for machine to come up
	I0910 17:49:41.740855   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:41.741278   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:41.741305   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:41.741226   24525 retry.go:31] will retry after 448.475273ms: waiting for machine to come up
	I0910 17:49:42.190603   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:42.190989   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:42.191013   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:42.190948   24525 retry.go:31] will retry after 694.285595ms: waiting for machine to come up
	I0910 17:49:42.886793   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:42.887139   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:42.887170   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:42.887112   24525 retry.go:31] will retry after 616.508694ms: waiting for machine to come up
	I0910 17:49:43.504695   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:43.505032   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:43.505058   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:43.504998   24525 retry.go:31] will retry after 1.006459093s: waiting for machine to come up
	I0910 17:49:44.512694   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:44.513136   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:44.513164   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:44.513091   24525 retry.go:31] will retry after 1.034183837s: waiting for machine to come up
	I0910 17:49:45.548509   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:45.548883   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:45.548910   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:45.548832   24525 retry.go:31] will retry after 1.839305323s: waiting for machine to come up
	I0910 17:49:47.390674   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:47.391133   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:47.391157   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:47.391056   24525 retry.go:31] will retry after 1.664309448s: waiting for machine to come up
	I0910 17:49:49.057865   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:49.058330   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:49.058356   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:49.058260   24525 retry.go:31] will retry after 1.942449004s: waiting for machine to come up
	I0910 17:49:51.002278   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:51.002667   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:51.002692   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:51.002634   24525 retry.go:31] will retry after 3.010752626s: waiting for machine to come up
	I0910 17:49:54.014576   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:54.014962   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:54.014991   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:54.014932   24525 retry.go:31] will retry after 3.22703265s: waiting for machine to come up
	I0910 17:49:57.245619   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:57.246008   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:57.246033   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:57.245978   24525 retry.go:31] will retry after 4.311890961s: waiting for machine to come up
	I0910 17:50:01.561029   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.561445   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has current primary IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.561464   24502 main.go:141] libmachine: (ha-558946) Found IP for machine: 192.168.39.109
	I0910 17:50:01.561477   24502 main.go:141] libmachine: (ha-558946) Reserving static IP address...
	I0910 17:50:01.561854   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find host DHCP lease matching {name: "ha-558946", mac: "52:54:00:19:8f:4f", ip: "192.168.39.109"} in network mk-ha-558946
	I0910 17:50:01.629833   24502 main.go:141] libmachine: (ha-558946) DBG | Getting to WaitForSSH function...
	I0910 17:50:01.629864   24502 main.go:141] libmachine: (ha-558946) Reserved static IP address: 192.168.39.109
	I0910 17:50:01.629879   24502 main.go:141] libmachine: (ha-558946) Waiting for SSH to be available...
	I0910 17:50:01.632245   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.632658   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:minikube Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:01.632684   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.632836   24502 main.go:141] libmachine: (ha-558946) DBG | Using SSH client type: external
	I0910 17:50:01.632862   24502 main.go:141] libmachine: (ha-558946) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa (-rw-------)
	I0910 17:50:01.632904   24502 main.go:141] libmachine: (ha-558946) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 17:50:01.632919   24502 main.go:141] libmachine: (ha-558946) DBG | About to run SSH command:
	I0910 17:50:01.632947   24502 main.go:141] libmachine: (ha-558946) DBG | exit 0
	I0910 17:50:01.757218   24502 main.go:141] libmachine: (ha-558946) DBG | SSH cmd err, output: <nil>: 
	I0910 17:50:01.757663   24502 main.go:141] libmachine: (ha-558946) KVM machine creation complete!
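	The retry loop above polls libvirt with increasing back-off until the guest obtains a DHCP lease (192.168.39.109 on mk-ha-558946), then confirms SSH with the generated key. A manual equivalent, assuming the host paths shown in this log, might be:
	
		# show DHCP leases handed out on the minikube-created network
		virsh net-dhcp-leases mk-ha-558946
		# confirm the guest answers on SSH with the key minikube generated
		ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
		  -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa \
		  docker@192.168.39.109 'exit 0'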
	I0910 17:50:01.758039   24502 main.go:141] libmachine: (ha-558946) Calling .GetConfigRaw
	I0910 17:50:01.758694   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:01.758891   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:01.759070   24502 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0910 17:50:01.759103   24502 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:50:01.760480   24502 main.go:141] libmachine: Detecting operating system of created instance...
	I0910 17:50:01.760495   24502 main.go:141] libmachine: Waiting for SSH to be available...
	I0910 17:50:01.760500   24502 main.go:141] libmachine: Getting to WaitForSSH function...
	I0910 17:50:01.760505   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:01.762521   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.762818   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:01.762840   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.762969   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:01.763129   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:01.763273   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:01.763389   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:01.763558   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:01.763736   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:50:01.763747   24502 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0910 17:50:01.868652   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:50:01.868672   24502 main.go:141] libmachine: Detecting the provisioner...
	I0910 17:50:01.868679   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:01.871336   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.871635   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:01.871671   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.871823   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:01.872030   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:01.872173   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:01.872333   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:01.872499   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:01.872667   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:50:01.872681   24502 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0910 17:50:01.977579   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0910 17:50:01.977675   24502 main.go:141] libmachine: found compatible host: buildroot
	I0910 17:50:01.977690   24502 main.go:141] libmachine: Provisioning with buildroot...
	I0910 17:50:01.977703   24502 main.go:141] libmachine: (ha-558946) Calling .GetMachineName
	I0910 17:50:01.977941   24502 buildroot.go:166] provisioning hostname "ha-558946"
	I0910 17:50:01.977962   24502 main.go:141] libmachine: (ha-558946) Calling .GetMachineName
	I0910 17:50:01.978147   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:01.980520   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.980849   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:01.980867   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.981010   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:01.981243   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:01.981430   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:01.981565   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:01.981722   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:01.981898   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:50:01.981913   24502 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-558946 && echo "ha-558946" | sudo tee /etc/hostname
	I0910 17:50:02.099018   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-558946
	
	I0910 17:50:02.099048   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:02.101744   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.102095   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:02.102122   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.102297   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:02.102444   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:02.102584   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:02.102706   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:02.102827   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:02.103035   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:50:02.103053   24502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-558946' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-558946/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-558946' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 17:50:02.213905   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
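	The hostname step above is idempotent: it rewrites an existing 127.0.1.1 entry or appends one, and only when /etc/hosts does not already map the name. A quick verification on the guest, assuming the same SSH access as above:
	
		# confirm the hostname and the /etc/hosts mapping the provisioner wrote
		hostname
		grep 'ha-558946' /etc/hosts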
	I0910 17:50:02.213934   24502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 17:50:02.213973   24502 buildroot.go:174] setting up certificates
	I0910 17:50:02.213982   24502 provision.go:84] configureAuth start
	I0910 17:50:02.213991   24502 main.go:141] libmachine: (ha-558946) Calling .GetMachineName
	I0910 17:50:02.214288   24502 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 17:50:02.216720   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.217142   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:02.217171   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.217361   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:02.219240   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.219515   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:02.219549   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.219625   24502 provision.go:143] copyHostCerts
	I0910 17:50:02.219663   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 17:50:02.219722   24502 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 17:50:02.219733   24502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 17:50:02.219819   24502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 17:50:02.219925   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 17:50:02.219945   24502 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 17:50:02.219952   24502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 17:50:02.219977   24502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 17:50:02.220032   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 17:50:02.220047   24502 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 17:50:02.220053   24502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 17:50:02.220075   24502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 17:50:02.220131   24502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.ha-558946 san=[127.0.0.1 192.168.39.109 ha-558946 localhost minikube]
	I0910 17:50:02.548645   24502 provision.go:177] copyRemoteCerts
	I0910 17:50:02.548693   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 17:50:02.548713   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:02.551327   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.551634   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:02.551653   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.551829   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:02.552021   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:02.552155   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:02.552283   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:50:02.634777   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0910 17:50:02.634840   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0910 17:50:02.659335   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0910 17:50:02.659396   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 17:50:02.682832   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0910 17:50:02.682905   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 17:50:02.705354   24502 provision.go:87] duration metric: took 491.359768ms to configureAuth
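	configureAuth above generates a server certificate whose SANs cover 127.0.0.1, 192.168.39.109, ha-558946, localhost and minikube, then copies it to /etc/docker on the guest. Assuming the host paths from this run, the SANs can be double-checked with openssl:
	
		# print the SANs of the server certificate minikube generated
		openssl x509 -in /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem \
		  -noout -text | grep -A1 'Subject Alternative Name'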
	I0910 17:50:02.705380   24502 buildroot.go:189] setting minikube options for container-runtime
	I0910 17:50:02.705582   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:50:02.705664   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:02.707934   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.708274   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:02.708300   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.708465   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:02.708655   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:02.708815   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:02.708931   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:02.709106   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:02.709393   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:50:02.709417   24502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 17:50:02.924108   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
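	The command above writes an environment drop-in that adds the service CIDR (10.96.0.0/12) as an insecure registry for CRI-O and then restarts the runtime. A sketch of a manual check on the guest, assuming systemd and the path minikube uses:
	
		# inspect the generated drop-in and confirm the runtime restarted cleanly
		cat /etc/sysconfig/crio.minikube
		sudo systemctl is-active crio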
	
	I0910 17:50:02.924134   24502 main.go:141] libmachine: Checking connection to Docker...
	I0910 17:50:02.924145   24502 main.go:141] libmachine: (ha-558946) Calling .GetURL
	I0910 17:50:02.925196   24502 main.go:141] libmachine: (ha-558946) DBG | Using libvirt version 6000000
	I0910 17:50:02.927214   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.927556   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:02.927582   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.927763   24502 main.go:141] libmachine: Docker is up and running!
	I0910 17:50:02.927776   24502 main.go:141] libmachine: Reticulating splines...
	I0910 17:50:02.927783   24502 client.go:171] duration metric: took 23.758306556s to LocalClient.Create
	I0910 17:50:02.927804   24502 start.go:167] duration metric: took 23.758360536s to libmachine.API.Create "ha-558946"
	I0910 17:50:02.927815   24502 start.go:293] postStartSetup for "ha-558946" (driver="kvm2")
	I0910 17:50:02.927827   24502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 17:50:02.927847   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:02.928053   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 17:50:02.928072   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:02.929894   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.930215   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:02.930244   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.930336   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:02.930498   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:02.930642   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:02.930800   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:50:03.011064   24502 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 17:50:03.015180   24502 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 17:50:03.015199   24502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 17:50:03.015261   24502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 17:50:03.015339   24502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 17:50:03.015350   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /etc/ssl/certs/131212.pem
	I0910 17:50:03.015435   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 17:50:03.024242   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 17:50:03.047404   24502 start.go:296] duration metric: took 119.576444ms for postStartSetup
	I0910 17:50:03.047451   24502 main.go:141] libmachine: (ha-558946) Calling .GetConfigRaw
	I0910 17:50:03.048018   24502 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 17:50:03.050509   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.050869   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:03.050888   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.051134   24502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:50:03.051298   24502 start.go:128] duration metric: took 23.899351421s to createHost
	I0910 17:50:03.051317   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:03.053313   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.053576   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:03.053601   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.053715   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:03.053871   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:03.054002   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:03.054092   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:03.054225   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:03.054386   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:50:03.054399   24502 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 17:50:03.157649   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725990603.133866575
	
	I0910 17:50:03.157667   24502 fix.go:216] guest clock: 1725990603.133866575
	I0910 17:50:03.157674   24502 fix.go:229] Guest: 2024-09-10 17:50:03.133866575 +0000 UTC Remote: 2024-09-10 17:50:03.051308157 +0000 UTC m=+23.997137359 (delta=82.558418ms)
	I0910 17:50:03.157703   24502 fix.go:200] guest clock delta is within tolerance: 82.558418ms
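	As a quick arithmetic check on the line above: 17:50:03.133866575 (guest) - 17:50:03.051308157 (remote) = 0.082558418 s, i.e. the 82.558418ms delta reported, well inside the clock-skew tolerance.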
	I0910 17:50:03.157710   24502 start.go:83] releasing machines lock for "ha-558946", held for 24.005824756s
	I0910 17:50:03.157744   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:03.157996   24502 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 17:50:03.160405   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.160705   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:03.160733   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.160895   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:03.161301   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:03.161469   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:03.161517   24502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 17:50:03.161570   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:03.161651   24502 ssh_runner.go:195] Run: cat /version.json
	I0910 17:50:03.161672   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:03.163837   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.164105   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:03.164124   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.164143   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.164319   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:03.164480   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:03.164618   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:03.164630   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.164638   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:03.164774   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:03.164825   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:50:03.165158   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:03.165330   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:03.165490   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:50:03.262615   24502 ssh_runner.go:195] Run: systemctl --version
	I0910 17:50:03.268168   24502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 17:50:03.424149   24502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 17:50:03.431627   24502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 17:50:03.431728   24502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 17:50:03.447902   24502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 17:50:03.447920   24502 start.go:495] detecting cgroup driver to use...
	I0910 17:50:03.447970   24502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 17:50:03.464681   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 17:50:03.478344   24502 docker.go:217] disabling cri-docker service (if available) ...
	I0910 17:50:03.478393   24502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 17:50:03.491617   24502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 17:50:03.504948   24502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 17:50:03.623678   24502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 17:50:03.777986   24502 docker.go:233] disabling docker service ...
	I0910 17:50:03.778053   24502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 17:50:03.795678   24502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 17:50:03.807738   24502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 17:50:03.927114   24502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 17:50:04.046700   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 17:50:04.061573   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 17:50:04.079740   24502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 17:50:04.079800   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:04.089945   24502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 17:50:04.090001   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:04.100275   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:04.110278   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:04.120193   24502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 17:50:04.130323   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:04.140410   24502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:04.156505   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:04.166564   24502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 17:50:04.175577   24502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 17:50:04.175615   24502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 17:50:04.187687   24502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 17:50:04.197125   24502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:50:04.314220   24502 ssh_runner.go:195] Run: sudo systemctl restart crio
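	Taken together, the sed/grep edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings once crio restarts (reconstructed from the commands in this log, not a capture of the actual file):

	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]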
	I0910 17:50:04.403163   24502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 17:50:04.403227   24502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 17:50:04.407880   24502 start.go:563] Will wait 60s for crictl version
	I0910 17:50:04.407927   24502 ssh_runner.go:195] Run: which crictl
	I0910 17:50:04.411519   24502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 17:50:04.448166   24502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
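	The same runtime check can be reproduced by hand against the socket written to /etc/crictl.yaml earlier; a minimal sketch (the log itself just runs the plain crictl and crio binaries):

	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version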
	I0910 17:50:04.448229   24502 ssh_runner.go:195] Run: crio --version
	I0910 17:50:04.475650   24502 ssh_runner.go:195] Run: crio --version
	I0910 17:50:04.505995   24502 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 17:50:04.507159   24502 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 17:50:04.509693   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:04.510041   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:04.510064   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:04.510257   24502 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 17:50:04.514205   24502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:50:04.526831   24502 kubeadm.go:883] updating cluster {Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 17:50:04.526952   24502 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:50:04.527013   24502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 17:50:04.561988   24502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 17:50:04.562047   24502 ssh_runner.go:195] Run: which lz4
	I0910 17:50:04.565573   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0910 17:50:04.565650   24502 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 17:50:04.569559   24502 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 17:50:04.569581   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 17:50:05.859847   24502 crio.go:462] duration metric: took 1.294220445s to copy over tarball
	I0910 17:50:05.859916   24502 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 17:50:07.877493   24502 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.017547737s)
	I0910 17:50:07.877524   24502 crio.go:469] duration metric: took 2.017650904s to extract the tarball
	I0910 17:50:07.877533   24502 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 17:50:07.914725   24502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 17:50:07.958892   24502 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 17:50:07.958913   24502 cache_images.go:84] Images are preloaded, skipping loading
	I0910 17:50:07.958920   24502 kubeadm.go:934] updating node { 192.168.39.109 8443 v1.31.0 crio true true} ...
	I0910 17:50:07.959026   24502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-558946 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
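	Given the scp destinations a few lines below (10-kubeadm.conf and kubelet.service), the unit text above presumably ends up as a systemd drop-in on the guest; assuming those paths, it could be inspected with:

	    sudo systemctl cat kubelet
	    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf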
	I0910 17:50:07.959104   24502 ssh_runner.go:195] Run: crio config
	I0910 17:50:08.002476   24502 cni.go:84] Creating CNI manager for ""
	I0910 17:50:08.002493   24502 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0910 17:50:08.002503   24502 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 17:50:08.002528   24502 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.109 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-558946 NodeName:ha-558946 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 17:50:08.002673   24502 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-558946"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
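	A config like the one above can be exercised without touching the node by running kubeadm in dry-run mode; a minimal sketch, assuming the same binary and config paths this log uses later:

	    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run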
	
	I0910 17:50:08.002696   24502 kube-vip.go:115] generating kube-vip config ...
	I0910 17:50:08.002750   24502 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0910 17:50:08.019635   24502 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0910 17:50:08.019728   24502 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
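	If this static pod comes up as intended, the advertised VIP 192.168.39.254 should be bound on eth0 of whichever control-plane node holds the plndr-cp-lock lease. A rough way to verify (the pod name here assumes the usual static-pod naming, manifest name plus node name):

	    ip addr show eth0 | grep 192.168.39.254
	    kubectl -n kube-system get pod kube-vip-ha-558946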
	I0910 17:50:08.019787   24502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 17:50:08.030022   24502 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 17:50:08.030085   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0910 17:50:08.039779   24502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0910 17:50:08.056653   24502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 17:50:08.072802   24502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0910 17:50:08.088307   24502 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0910 17:50:08.103758   24502 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0910 17:50:08.107195   24502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:50:08.118914   24502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:50:08.241425   24502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:50:08.259439   24502 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946 for IP: 192.168.39.109
	I0910 17:50:08.259476   24502 certs.go:194] generating shared ca certs ...
	I0910 17:50:08.259495   24502 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:08.259673   24502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 17:50:08.259726   24502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 17:50:08.259740   24502 certs.go:256] generating profile certs ...
	I0910 17:50:08.259806   24502 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.key
	I0910 17:50:08.259830   24502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.crt with IP's: []
	I0910 17:50:08.416618   24502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.crt ...
	I0910 17:50:08.416641   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.crt: {Name:mk02a24e9066514871a2e5b41e9bcd6c7425a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:08.416791   24502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.key ...
	I0910 17:50:08.416801   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.key: {Name:mk0aa9a9e3d6cec45852bec5c42bc0b52d7701b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:08.416878   24502 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.dea87bef
	I0910 17:50:08.416893   24502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.dea87bef with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.109 192.168.39.254]
	I0910 17:50:08.652698   24502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.dea87bef ...
	I0910 17:50:08.652724   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.dea87bef: {Name:mk5e0b96cb3e4be0397b134fb9c806462cb4f639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:08.652873   24502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.dea87bef ...
	I0910 17:50:08.652885   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.dea87bef: {Name:mkadf564b2290466f24114dda6ad78ad96425087 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:08.652961   24502 certs.go:381] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.dea87bef -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt
	I0910 17:50:08.653045   24502 certs.go:385] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.dea87bef -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key
	I0910 17:50:08.653135   24502 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key
	I0910 17:50:08.653155   24502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt with IP's: []
	I0910 17:50:08.891264   24502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt ...
	I0910 17:50:08.891293   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt: {Name:mk8c6979845b5ba1e31bbcdbd008b433a414d8bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:08.891475   24502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key ...
	I0910 17:50:08.891492   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key: {Name:mk2be15a3801bc87359871b239ea8db29babef34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:08.891583   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 17:50:08.891605   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0910 17:50:08.891623   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 17:50:08.891641   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 17:50:08.891658   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 17:50:08.891674   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 17:50:08.891686   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 17:50:08.891704   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 17:50:08.891763   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 17:50:08.891806   24502 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 17:50:08.891820   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 17:50:08.891854   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 17:50:08.891886   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 17:50:08.891915   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 17:50:08.891968   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 17:50:08.892004   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem -> /usr/share/ca-certificates/13121.pem
	I0910 17:50:08.892023   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /usr/share/ca-certificates/131212.pem
	I0910 17:50:08.892041   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:50:08.892574   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 17:50:08.920188   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 17:50:08.950243   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 17:50:08.980967   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 17:50:09.018266   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0910 17:50:09.053699   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 17:50:09.078323   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 17:50:09.102014   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 17:50:09.126110   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 17:50:09.148643   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 17:50:09.171899   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 17:50:09.195493   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 17:50:09.211404   24502 ssh_runner.go:195] Run: openssl version
	I0910 17:50:09.217163   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 17:50:09.227301   24502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:50:09.231678   24502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:50:09.231725   24502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:50:09.237377   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 17:50:09.247109   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 17:50:09.257172   24502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 17:50:09.261509   24502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 17:50:09.261545   24502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 17:50:09.266952   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 17:50:09.276846   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 17:50:09.286657   24502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 17:50:09.290950   24502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 17:50:09.290991   24502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 17:50:09.296343   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
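	The test/ln pairs above build the standard OpenSSL hashed-directory layout: each CA ends up linked as <subject-hash>.0 so verifiers can locate it by hash (hence b5213941.0, 51391683.0 and 3ec20f2e.0). Doing the same by hand for one certificate would look roughly like:

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"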
	I0910 17:50:09.306132   24502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 17:50:09.310011   24502 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 17:50:09.310061   24502 kubeadm.go:392] StartCluster: {Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:50:09.310128   24502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 17:50:09.310169   24502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 17:50:09.344121   24502 cri.go:89] found id: ""
	I0910 17:50:09.344178   24502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 17:50:09.353363   24502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 17:50:09.362509   24502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 17:50:09.371504   24502 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 17:50:09.371521   24502 kubeadm.go:157] found existing configuration files:
	
	I0910 17:50:09.371562   24502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 17:50:09.380093   24502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 17:50:09.380144   24502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 17:50:09.389132   24502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 17:50:09.397443   24502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 17:50:09.397488   24502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 17:50:09.406163   24502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 17:50:09.414438   24502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 17:50:09.414483   24502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 17:50:09.423200   24502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 17:50:09.431469   24502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 17:50:09.431513   24502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 17:50:09.440172   24502 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 17:50:09.554959   24502 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 17:50:09.555099   24502 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 17:50:09.666189   24502 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 17:50:09.666283   24502 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 17:50:09.666367   24502 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 17:50:09.678602   24502 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 17:50:09.709667   24502 out.go:235]   - Generating certificates and keys ...
	I0910 17:50:09.709792   24502 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 17:50:09.709873   24502 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 17:50:09.844596   24502 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 17:50:10.088833   24502 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0910 17:50:10.178873   24502 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0910 17:50:10.264095   24502 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0910 17:50:10.651300   24502 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0910 17:50:10.651439   24502 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-558946 localhost] and IPs [192.168.39.109 127.0.0.1 ::1]
	I0910 17:50:10.731932   24502 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0910 17:50:10.732081   24502 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-558946 localhost] and IPs [192.168.39.109 127.0.0.1 ::1]
	I0910 17:50:11.144773   24502 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 17:50:11.316362   24502 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 17:50:11.492676   24502 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0910 17:50:11.492747   24502 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 17:50:11.653203   24502 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 17:50:11.907502   24502 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 17:50:12.136495   24502 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 17:50:12.348260   24502 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 17:50:12.558229   24502 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 17:50:12.558766   24502 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 17:50:12.563826   24502 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 17:50:12.565756   24502 out.go:235]   - Booting up control plane ...
	I0910 17:50:12.565856   24502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 17:50:12.565965   24502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 17:50:12.566063   24502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 17:50:12.582150   24502 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 17:50:12.590956   24502 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 17:50:12.591011   24502 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 17:50:12.740364   24502 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 17:50:12.740512   24502 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 17:50:13.740525   24502 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000862182s
	I0910 17:50:13.740620   24502 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 17:50:19.562935   24502 kubeadm.go:310] [api-check] The API server is healthy after 5.825318755s
	I0910 17:50:19.578088   24502 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 17:50:19.596127   24502 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 17:50:19.634765   24502 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 17:50:19.634949   24502 kubeadm.go:310] [mark-control-plane] Marking the node ha-558946 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 17:50:19.646641   24502 kubeadm.go:310] [bootstrap-token] Using token: 6pfcgw.55ya2kbllqozh475
	I0910 17:50:19.648086   24502 out.go:235]   - Configuring RBAC rules ...
	I0910 17:50:19.648186   24502 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 17:50:19.663616   24502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 17:50:19.673774   24502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 17:50:19.677251   24502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 17:50:19.681178   24502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 17:50:19.685377   24502 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 17:50:19.969343   24502 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 17:50:20.404598   24502 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 17:50:20.970536   24502 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 17:50:20.971538   24502 kubeadm.go:310] 
	I0910 17:50:20.971612   24502 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 17:50:20.971623   24502 kubeadm.go:310] 
	I0910 17:50:20.971713   24502 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 17:50:20.971730   24502 kubeadm.go:310] 
	I0910 17:50:20.971761   24502 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 17:50:20.971815   24502 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 17:50:20.971880   24502 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 17:50:20.971896   24502 kubeadm.go:310] 
	I0910 17:50:20.971965   24502 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 17:50:20.971976   24502 kubeadm.go:310] 
	I0910 17:50:20.972044   24502 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 17:50:20.972053   24502 kubeadm.go:310] 
	I0910 17:50:20.972122   24502 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 17:50:20.972222   24502 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 17:50:20.972320   24502 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 17:50:20.972329   24502 kubeadm.go:310] 
	I0910 17:50:20.972433   24502 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 17:50:20.972538   24502 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 17:50:20.972547   24502 kubeadm.go:310] 
	I0910 17:50:20.972654   24502 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6pfcgw.55ya2kbllqozh475 \
	I0910 17:50:20.972761   24502 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 \
	I0910 17:50:20.972798   24502 kubeadm.go:310] 	--control-plane 
	I0910 17:50:20.972808   24502 kubeadm.go:310] 
	I0910 17:50:20.972898   24502 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 17:50:20.972906   24502 kubeadm.go:310] 
	I0910 17:50:20.972973   24502 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6pfcgw.55ya2kbllqozh475 \
	I0910 17:50:20.973100   24502 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 
	I0910 17:50:20.974323   24502 kubeadm.go:310] W0910 17:50:09.535495     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 17:50:20.974612   24502 kubeadm.go:310] W0910 17:50:09.536421     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 17:50:20.974733   24502 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 17:50:20.974759   24502 cni.go:84] Creating CNI manager for ""
	I0910 17:50:20.974771   24502 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0910 17:50:20.976400   24502 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0910 17:50:20.977678   24502 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0910 17:50:20.983151   24502 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0910 17:50:20.983170   24502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0910 17:50:21.001643   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0910 17:50:21.432325   24502 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 17:50:21.432378   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:50:21.432448   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-558946 minikube.k8s.io/updated_at=2024_09_10T17_50_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=ha-558946 minikube.k8s.io/primary=true
	I0910 17:50:21.597339   24502 ops.go:34] apiserver oom_adj: -16
	I0910 17:50:21.633387   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:50:22.134183   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:50:22.634091   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:50:23.133856   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:50:23.634143   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:50:24.134355   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:50:24.633929   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:50:25.134000   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:50:25.286471   24502 kubeadm.go:1113] duration metric: took 3.854135157s to wait for elevateKubeSystemPrivileges
	I0910 17:50:25.286512   24502 kubeadm.go:394] duration metric: took 15.976455198s to StartCluster
	I0910 17:50:25.286533   24502 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:25.286621   24502 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:50:25.287196   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:25.287395   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0910 17:50:25.287394   24502 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:50:25.287416   24502 start.go:241] waiting for startup goroutines ...
	I0910 17:50:25.287432   24502 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 17:50:25.287506   24502 addons.go:69] Setting storage-provisioner=true in profile "ha-558946"
	I0910 17:50:25.287513   24502 addons.go:69] Setting default-storageclass=true in profile "ha-558946"
	I0910 17:50:25.287535   24502 addons.go:234] Setting addon storage-provisioner=true in "ha-558946"
	I0910 17:50:25.287539   24502 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-558946"
	I0910 17:50:25.287564   24502 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:50:25.287609   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:50:25.287933   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:50:25.287950   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:50:25.287966   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:50:25.287983   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:50:25.302239   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44067
	I0910 17:50:25.302511   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37141
	I0910 17:50:25.302794   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:50:25.302983   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:50:25.303343   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:50:25.303366   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:50:25.303566   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:50:25.303598   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:50:25.303668   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:50:25.303841   24502 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:50:25.303925   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:50:25.304542   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:50:25.304597   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:50:25.306071   24502 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:50:25.306415   24502 kapi.go:59] client config for ha-558946: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.crt", KeyFile:"/home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.key", CAFile:"/home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f2c360), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 17:50:25.306936   24502 cert_rotation.go:140] Starting client certificate rotation controller
	I0910 17:50:25.307202   24502 addons.go:234] Setting addon default-storageclass=true in "ha-558946"
	I0910 17:50:25.307244   24502 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:50:25.307616   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:50:25.307662   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:50:25.320561   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39083
	I0910 17:50:25.321101   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:50:25.321629   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:50:25.321647   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:50:25.321972   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:50:25.322154   24502 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:50:25.322221   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43181
	I0910 17:50:25.322511   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:50:25.322888   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:50:25.322905   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:50:25.323233   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:50:25.323799   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:50:25.323836   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:25.323843   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:50:25.325907   24502 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 17:50:25.327204   24502 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:50:25.327220   24502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 17:50:25.327237   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:25.330683   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:25.331140   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:25.331167   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:25.331322   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:25.331543   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:25.331715   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:25.331871   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:50:25.339453   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42849
	I0910 17:50:25.339845   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:50:25.340291   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:50:25.340316   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:50:25.340649   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:50:25.340847   24502 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:50:25.342323   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:25.342514   24502 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 17:50:25.342531   24502 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 17:50:25.342547   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:25.345740   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:25.346233   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:25.346258   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:25.346409   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:25.346576   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:25.346739   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:25.346861   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:50:25.465300   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0910 17:50:25.495431   24502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 17:50:25.527897   24502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:50:26.004104   24502 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0910 17:50:26.004176   24502 main.go:141] libmachine: Making call to close driver server
	I0910 17:50:26.004194   24502 main.go:141] libmachine: (ha-558946) Calling .Close
	I0910 17:50:26.004478   24502 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:50:26.004495   24502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:50:26.004510   24502 main.go:141] libmachine: Making call to close driver server
	I0910 17:50:26.004519   24502 main.go:141] libmachine: (ha-558946) Calling .Close
	I0910 17:50:26.004849   24502 main.go:141] libmachine: (ha-558946) DBG | Closing plugin on server side
	I0910 17:50:26.004863   24502 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:50:26.004877   24502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:50:26.004938   24502 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0910 17:50:26.004955   24502 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0910 17:50:26.005044   24502 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0910 17:50:26.005054   24502 round_trippers.go:469] Request Headers:
	I0910 17:50:26.005064   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:50:26.005091   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:50:26.013193   24502 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 17:50:26.013699   24502 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0910 17:50:26.013712   24502 round_trippers.go:469] Request Headers:
	I0910 17:50:26.013722   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:50:26.013728   24502 round_trippers.go:473]     Content-Type: application/json
	I0910 17:50:26.013732   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:50:26.018699   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:50:26.018832   24502 main.go:141] libmachine: Making call to close driver server
	I0910 17:50:26.018849   24502 main.go:141] libmachine: (ha-558946) Calling .Close
	I0910 17:50:26.019079   24502 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:50:26.019100   24502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:50:26.019107   24502 main.go:141] libmachine: (ha-558946) DBG | Closing plugin on server side
	I0910 17:50:26.263137   24502 main.go:141] libmachine: Making call to close driver server
	I0910 17:50:26.263162   24502 main.go:141] libmachine: (ha-558946) Calling .Close
	I0910 17:50:26.263455   24502 main.go:141] libmachine: (ha-558946) DBG | Closing plugin on server side
	I0910 17:50:26.263491   24502 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:50:26.263500   24502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:50:26.263510   24502 main.go:141] libmachine: Making call to close driver server
	I0910 17:50:26.263520   24502 main.go:141] libmachine: (ha-558946) Calling .Close
	I0910 17:50:26.263883   24502 main.go:141] libmachine: (ha-558946) DBG | Closing plugin on server side
	I0910 17:50:26.263917   24502 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:50:26.263929   24502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:50:26.265477   24502 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0910 17:50:26.266747   24502 addons.go:510] duration metric: took 979.320996ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0910 17:50:26.266777   24502 start.go:246] waiting for cluster config update ...
	I0910 17:50:26.266788   24502 start.go:255] writing updated cluster config ...
	I0910 17:50:26.268434   24502 out.go:201] 
	I0910 17:50:26.269831   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:50:26.269896   24502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:50:26.271493   24502 out.go:177] * Starting "ha-558946-m02" control-plane node in "ha-558946" cluster
	I0910 17:50:26.273011   24502 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:50:26.273029   24502 cache.go:56] Caching tarball of preloaded images
	I0910 17:50:26.273114   24502 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 17:50:26.273127   24502 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 17:50:26.273183   24502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:50:26.273554   24502 start.go:360] acquireMachinesLock for ha-558946-m02: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 17:50:26.273591   24502 start.go:364] duration metric: took 20.548µs to acquireMachinesLock for "ha-558946-m02"
	I0910 17:50:26.273604   24502 start.go:93] Provisioning new machine with config: &{Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:50:26.273665   24502 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0910 17:50:26.275158   24502 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 17:50:26.275224   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:50:26.275244   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:50:26.289864   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38285
	I0910 17:50:26.290242   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:50:26.290706   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:50:26.290723   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:50:26.291024   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:50:26.291213   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetMachineName
	I0910 17:50:26.291362   24502 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:50:26.291524   24502 start.go:159] libmachine.API.Create for "ha-558946" (driver="kvm2")
	I0910 17:50:26.291547   24502 client.go:168] LocalClient.Create starting
	I0910 17:50:26.291578   24502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem
	I0910 17:50:26.291616   24502 main.go:141] libmachine: Decoding PEM data...
	I0910 17:50:26.291636   24502 main.go:141] libmachine: Parsing certificate...
	I0910 17:50:26.291701   24502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem
	I0910 17:50:26.291727   24502 main.go:141] libmachine: Decoding PEM data...
	I0910 17:50:26.291743   24502 main.go:141] libmachine: Parsing certificate...
	I0910 17:50:26.291766   24502 main.go:141] libmachine: Running pre-create checks...
	I0910 17:50:26.291785   24502 main.go:141] libmachine: (ha-558946-m02) Calling .PreCreateCheck
	I0910 17:50:26.291927   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetConfigRaw
	I0910 17:50:26.292349   24502 main.go:141] libmachine: Creating machine...
	I0910 17:50:26.292366   24502 main.go:141] libmachine: (ha-558946-m02) Calling .Create
	I0910 17:50:26.292491   24502 main.go:141] libmachine: (ha-558946-m02) Creating KVM machine...
	I0910 17:50:26.293620   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found existing default KVM network
	I0910 17:50:26.293738   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found existing private KVM network mk-ha-558946
	I0910 17:50:26.293883   24502 main.go:141] libmachine: (ha-558946-m02) Setting up store path in /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02 ...
	I0910 17:50:26.293908   24502 main.go:141] libmachine: (ha-558946-m02) Building disk image from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 17:50:26.293943   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:26.293859   24863 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:50:26.294030   24502 main.go:141] libmachine: (ha-558946-m02) Downloading /home/jenkins/minikube-integration/19598-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 17:50:26.519575   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:26.519434   24863 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa...
	I0910 17:50:26.605750   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:26.605615   24863 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/ha-558946-m02.rawdisk...
	I0910 17:50:26.605789   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Writing magic tar header
	I0910 17:50:26.605804   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Writing SSH key tar header
	I0910 17:50:26.605818   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:26.605761   24863 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02 ...
	I0910 17:50:26.605929   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02
	I0910 17:50:26.605948   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines
	I0910 17:50:26.605981   24502 main.go:141] libmachine: (ha-558946-m02) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02 (perms=drwx------)
	I0910 17:50:26.606012   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:50:26.606027   24502 main.go:141] libmachine: (ha-558946-m02) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines (perms=drwxr-xr-x)
	I0910 17:50:26.606040   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973
	I0910 17:50:26.606051   24502 main.go:141] libmachine: (ha-558946-m02) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube (perms=drwxr-xr-x)
	I0910 17:50:26.606062   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0910 17:50:26.606073   24502 main.go:141] libmachine: (ha-558946-m02) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973 (perms=drwxrwxr-x)
	I0910 17:50:26.606091   24502 main.go:141] libmachine: (ha-558946-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0910 17:50:26.606103   24502 main.go:141] libmachine: (ha-558946-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0910 17:50:26.606118   24502 main.go:141] libmachine: (ha-558946-m02) Creating domain...
	I0910 17:50:26.606130   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Checking permissions on dir: /home/jenkins
	I0910 17:50:26.606157   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Checking permissions on dir: /home
	I0910 17:50:26.606179   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Skipping /home - not owner
	I0910 17:50:26.606936   24502 main.go:141] libmachine: (ha-558946-m02) define libvirt domain using xml: 
	I0910 17:50:26.606957   24502 main.go:141] libmachine: (ha-558946-m02) <domain type='kvm'>
	I0910 17:50:26.606966   24502 main.go:141] libmachine: (ha-558946-m02)   <name>ha-558946-m02</name>
	I0910 17:50:26.606977   24502 main.go:141] libmachine: (ha-558946-m02)   <memory unit='MiB'>2200</memory>
	I0910 17:50:26.606988   24502 main.go:141] libmachine: (ha-558946-m02)   <vcpu>2</vcpu>
	I0910 17:50:26.606997   24502 main.go:141] libmachine: (ha-558946-m02)   <features>
	I0910 17:50:26.607005   24502 main.go:141] libmachine: (ha-558946-m02)     <acpi/>
	I0910 17:50:26.607014   24502 main.go:141] libmachine: (ha-558946-m02)     <apic/>
	I0910 17:50:26.607024   24502 main.go:141] libmachine: (ha-558946-m02)     <pae/>
	I0910 17:50:26.607033   24502 main.go:141] libmachine: (ha-558946-m02)     
	I0910 17:50:26.607041   24502 main.go:141] libmachine: (ha-558946-m02)   </features>
	I0910 17:50:26.607051   24502 main.go:141] libmachine: (ha-558946-m02)   <cpu mode='host-passthrough'>
	I0910 17:50:26.607070   24502 main.go:141] libmachine: (ha-558946-m02)   
	I0910 17:50:26.607090   24502 main.go:141] libmachine: (ha-558946-m02)   </cpu>
	I0910 17:50:26.607105   24502 main.go:141] libmachine: (ha-558946-m02)   <os>
	I0910 17:50:26.607113   24502 main.go:141] libmachine: (ha-558946-m02)     <type>hvm</type>
	I0910 17:50:26.607121   24502 main.go:141] libmachine: (ha-558946-m02)     <boot dev='cdrom'/>
	I0910 17:50:26.607127   24502 main.go:141] libmachine: (ha-558946-m02)     <boot dev='hd'/>
	I0910 17:50:26.607134   24502 main.go:141] libmachine: (ha-558946-m02)     <bootmenu enable='no'/>
	I0910 17:50:26.607138   24502 main.go:141] libmachine: (ha-558946-m02)   </os>
	I0910 17:50:26.607144   24502 main.go:141] libmachine: (ha-558946-m02)   <devices>
	I0910 17:50:26.607152   24502 main.go:141] libmachine: (ha-558946-m02)     <disk type='file' device='cdrom'>
	I0910 17:50:26.607160   24502 main.go:141] libmachine: (ha-558946-m02)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/boot2docker.iso'/>
	I0910 17:50:26.607172   24502 main.go:141] libmachine: (ha-558946-m02)       <target dev='hdc' bus='scsi'/>
	I0910 17:50:26.607182   24502 main.go:141] libmachine: (ha-558946-m02)       <readonly/>
	I0910 17:50:26.607192   24502 main.go:141] libmachine: (ha-558946-m02)     </disk>
	I0910 17:50:26.607204   24502 main.go:141] libmachine: (ha-558946-m02)     <disk type='file' device='disk'>
	I0910 17:50:26.607216   24502 main.go:141] libmachine: (ha-558946-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0910 17:50:26.607232   24502 main.go:141] libmachine: (ha-558946-m02)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/ha-558946-m02.rawdisk'/>
	I0910 17:50:26.607241   24502 main.go:141] libmachine: (ha-558946-m02)       <target dev='hda' bus='virtio'/>
	I0910 17:50:26.607273   24502 main.go:141] libmachine: (ha-558946-m02)     </disk>
	I0910 17:50:26.607294   24502 main.go:141] libmachine: (ha-558946-m02)     <interface type='network'>
	I0910 17:50:26.607309   24502 main.go:141] libmachine: (ha-558946-m02)       <source network='mk-ha-558946'/>
	I0910 17:50:26.607320   24502 main.go:141] libmachine: (ha-558946-m02)       <model type='virtio'/>
	I0910 17:50:26.607329   24502 main.go:141] libmachine: (ha-558946-m02)     </interface>
	I0910 17:50:26.607341   24502 main.go:141] libmachine: (ha-558946-m02)     <interface type='network'>
	I0910 17:50:26.607355   24502 main.go:141] libmachine: (ha-558946-m02)       <source network='default'/>
	I0910 17:50:26.607369   24502 main.go:141] libmachine: (ha-558946-m02)       <model type='virtio'/>
	I0910 17:50:26.607383   24502 main.go:141] libmachine: (ha-558946-m02)     </interface>
	I0910 17:50:26.607394   24502 main.go:141] libmachine: (ha-558946-m02)     <serial type='pty'>
	I0910 17:50:26.607406   24502 main.go:141] libmachine: (ha-558946-m02)       <target port='0'/>
	I0910 17:50:26.607414   24502 main.go:141] libmachine: (ha-558946-m02)     </serial>
	I0910 17:50:26.607424   24502 main.go:141] libmachine: (ha-558946-m02)     <console type='pty'>
	I0910 17:50:26.607431   24502 main.go:141] libmachine: (ha-558946-m02)       <target type='serial' port='0'/>
	I0910 17:50:26.607447   24502 main.go:141] libmachine: (ha-558946-m02)     </console>
	I0910 17:50:26.607462   24502 main.go:141] libmachine: (ha-558946-m02)     <rng model='virtio'>
	I0910 17:50:26.607473   24502 main.go:141] libmachine: (ha-558946-m02)       <backend model='random'>/dev/random</backend>
	I0910 17:50:26.607480   24502 main.go:141] libmachine: (ha-558946-m02)     </rng>
	I0910 17:50:26.607491   24502 main.go:141] libmachine: (ha-558946-m02)     
	I0910 17:50:26.607501   24502 main.go:141] libmachine: (ha-558946-m02)     
	I0910 17:50:26.607510   24502 main.go:141] libmachine: (ha-558946-m02)   </devices>
	I0910 17:50:26.607519   24502 main.go:141] libmachine: (ha-558946-m02) </domain>
	I0910 17:50:26.607529   24502 main.go:141] libmachine: (ha-558946-m02) 
	I0910 17:50:26.613978   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:54:64:6d in network default
	I0910 17:50:26.614547   24502 main.go:141] libmachine: (ha-558946-m02) Ensuring networks are active...
	I0910 17:50:26.614567   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:26.615166   24502 main.go:141] libmachine: (ha-558946-m02) Ensuring network default is active
	I0910 17:50:26.615504   24502 main.go:141] libmachine: (ha-558946-m02) Ensuring network mk-ha-558946 is active
	I0910 17:50:26.615852   24502 main.go:141] libmachine: (ha-558946-m02) Getting domain xml...
	I0910 17:50:26.616554   24502 main.go:141] libmachine: (ha-558946-m02) Creating domain...
	I0910 17:50:27.911789   24502 main.go:141] libmachine: (ha-558946-m02) Waiting to get IP...
	I0910 17:50:27.912693   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:27.913100   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:27.913133   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:27.913057   24863 retry.go:31] will retry after 265.359054ms: waiting for machine to come up
	I0910 17:50:28.180522   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:28.181044   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:28.181081   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:28.180999   24863 retry.go:31] will retry after 346.921747ms: waiting for machine to come up
	I0910 17:50:28.529416   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:28.529856   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:28.529881   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:28.529812   24863 retry.go:31] will retry after 484.868215ms: waiting for machine to come up
	I0910 17:50:29.016460   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:29.016814   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:29.016839   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:29.016763   24863 retry.go:31] will retry after 587.990914ms: waiting for machine to come up
	I0910 17:50:29.606433   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:29.606820   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:29.606848   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:29.606771   24863 retry.go:31] will retry after 651.119057ms: waiting for machine to come up
	I0910 17:50:30.259417   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:30.259760   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:30.259796   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:30.259735   24863 retry.go:31] will retry after 919.832632ms: waiting for machine to come up
	I0910 17:50:31.180652   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:31.181156   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:31.181178   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:31.181117   24863 retry.go:31] will retry after 1.100585606s: waiting for machine to come up
	I0910 17:50:32.282871   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:32.283254   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:32.283333   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:32.283248   24863 retry.go:31] will retry after 1.162968125s: waiting for machine to come up
	I0910 17:50:33.447357   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:33.447777   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:33.447805   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:33.447742   24863 retry.go:31] will retry after 1.773199242s: waiting for machine to come up
	I0910 17:50:35.222236   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:35.222808   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:35.222839   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:35.222783   24863 retry.go:31] will retry after 1.986522729s: waiting for machine to come up
	I0910 17:50:37.210834   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:37.211199   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:37.211226   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:37.211169   24863 retry.go:31] will retry after 1.791392731s: waiting for machine to come up
	I0910 17:50:39.005044   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:39.005472   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:39.005500   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:39.005423   24863 retry.go:31] will retry after 3.176867694s: waiting for machine to come up
	I0910 17:50:42.184204   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:42.184632   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:42.184662   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:42.184582   24863 retry.go:31] will retry after 4.493314199s: waiting for machine to come up
	I0910 17:50:46.679177   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:46.679745   24502 main.go:141] libmachine: (ha-558946-m02) Found IP for machine: 192.168.39.96
	I0910 17:50:46.679772   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has current primary IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:46.679778   24502 main.go:141] libmachine: (ha-558946-m02) Reserving static IP address...
	I0910 17:50:46.680151   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find host DHCP lease matching {name: "ha-558946-m02", mac: "52:54:00:68:52:22", ip: "192.168.39.96"} in network mk-ha-558946
	I0910 17:50:46.749349   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Getting to WaitForSSH function...
	I0910 17:50:46.749369   24502 main.go:141] libmachine: (ha-558946-m02) Reserved static IP address: 192.168.39.96
	I0910 17:50:46.749383   24502 main.go:141] libmachine: (ha-558946-m02) Waiting for SSH to be available...
	I0910 17:50:46.751784   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:46.752178   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:minikube Clientid:01:52:54:00:68:52:22}
	I0910 17:50:46.752199   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:46.752345   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Using SSH client type: external
	I0910 17:50:46.752371   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa (-rw-------)
	I0910 17:50:46.752401   24502 main.go:141] libmachine: (ha-558946-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 17:50:46.752414   24502 main.go:141] libmachine: (ha-558946-m02) DBG | About to run SSH command:
	I0910 17:50:46.752426   24502 main.go:141] libmachine: (ha-558946-m02) DBG | exit 0
	I0910 17:50:46.884926   24502 main.go:141] libmachine: (ha-558946-m02) DBG | SSH cmd err, output: <nil>: 
	I0910 17:50:46.885185   24502 main.go:141] libmachine: (ha-558946-m02) KVM machine creation complete!
	I0910 17:50:46.885469   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetConfigRaw
	I0910 17:50:46.886114   24502 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:50:46.886290   24502 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:50:46.886458   24502 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0910 17:50:46.886475   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetState
	I0910 17:50:46.887790   24502 main.go:141] libmachine: Detecting operating system of created instance...
	I0910 17:50:46.887802   24502 main.go:141] libmachine: Waiting for SSH to be available...
	I0910 17:50:46.887807   24502 main.go:141] libmachine: Getting to WaitForSSH function...
	I0910 17:50:46.887812   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:46.890903   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:46.891302   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:46.891318   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:46.891468   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:46.891662   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:46.891899   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:46.892097   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:46.892272   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:46.892519   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0910 17:50:46.892537   24502 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0910 17:50:47.004186   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:50:47.004204   24502 main.go:141] libmachine: Detecting the provisioner...
	I0910 17:50:47.004211   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:47.006918   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.007246   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:47.007270   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.007496   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:47.007681   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:47.007842   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:47.007965   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:47.008122   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:47.008321   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0910 17:50:47.008333   24502 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0910 17:50:47.121864   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0910 17:50:47.121923   24502 main.go:141] libmachine: found compatible host: buildroot
	I0910 17:50:47.121932   24502 main.go:141] libmachine: Provisioning with buildroot...
	I0910 17:50:47.121943   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetMachineName
	I0910 17:50:47.122176   24502 buildroot.go:166] provisioning hostname "ha-558946-m02"
	I0910 17:50:47.122203   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetMachineName
	I0910 17:50:47.122389   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:47.124630   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.124980   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:47.125006   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.125152   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:47.125439   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:47.125637   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:47.125805   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:47.125965   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:47.126152   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0910 17:50:47.126170   24502 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-558946-m02 && echo "ha-558946-m02" | sudo tee /etc/hostname
	I0910 17:50:47.252001   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-558946-m02
	
	I0910 17:50:47.252044   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:47.254689   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.255064   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:47.255094   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.255277   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:47.255463   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:47.255609   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:47.255703   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:47.255858   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:47.256042   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0910 17:50:47.256059   24502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-558946-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-558946-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-558946-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 17:50:47.379654   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:50:47.379678   24502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 17:50:47.379695   24502 buildroot.go:174] setting up certificates
	I0910 17:50:47.379705   24502 provision.go:84] configureAuth start
	I0910 17:50:47.379713   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetMachineName
	I0910 17:50:47.379953   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetIP
	I0910 17:50:47.382772   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.383194   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:47.383227   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.383377   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:47.385763   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.386073   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:47.386098   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.386212   24502 provision.go:143] copyHostCerts
	I0910 17:50:47.386253   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 17:50:47.386283   24502 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 17:50:47.386292   24502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 17:50:47.386351   24502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 17:50:47.386418   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 17:50:47.386435   24502 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 17:50:47.386442   24502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 17:50:47.386464   24502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 17:50:47.386507   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 17:50:47.386525   24502 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 17:50:47.386531   24502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 17:50:47.386552   24502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 17:50:47.386597   24502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.ha-558946-m02 san=[127.0.0.1 192.168.39.96 ha-558946-m02 localhost minikube]
	I0910 17:50:47.656823   24502 provision.go:177] copyRemoteCerts
	I0910 17:50:47.656876   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 17:50:47.656897   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:47.659317   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.659629   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:47.659660   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.659804   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:47.660022   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:47.660151   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:47.660279   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa Username:docker}
	I0910 17:50:47.747894   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0910 17:50:47.747962   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 17:50:47.775174   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0910 17:50:47.775243   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0910 17:50:47.801718   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0910 17:50:47.801784   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 17:50:47.828072   24502 provision.go:87] duration metric: took 448.356458ms to configureAuth
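
configureAuth above generates a server certificate for the new machine whose SAN list mixes IP addresses and DNS names (127.0.0.1, 192.168.39.96, ha-558946-m02, localhost, minikube) and then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A self-contained sketch of issuing a certificate with such a SAN list using Go's standard library; it self-signs for brevity, whereas the log signs server.pem with the profile's ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative only: build a server certificate whose SANs combine the
	// node's IPs and hostnames, similar in spirit to the server.pem above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-558946-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.96")},
		DNSNames:     []string{"ha-558946-m02", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
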
	I0910 17:50:47.828094   24502 buildroot.go:189] setting minikube options for container-runtime
	I0910 17:50:47.828297   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:50:47.828381   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:47.830678   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.831086   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:47.831133   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.831274   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:47.831460   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:47.831620   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:47.831763   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:47.831936   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:47.832077   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0910 17:50:47.832090   24502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 17:50:48.067038   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 17:50:48.067060   24502 main.go:141] libmachine: Checking connection to Docker...
	I0910 17:50:48.067067   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetURL
	I0910 17:50:48.068206   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Using libvirt version 6000000
	I0910 17:50:48.070686   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.071035   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:48.071059   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.071223   24502 main.go:141] libmachine: Docker is up and running!
	I0910 17:50:48.071233   24502 main.go:141] libmachine: Reticulating splines...
	I0910 17:50:48.071240   24502 client.go:171] duration metric: took 21.779684262s to LocalClient.Create
	I0910 17:50:48.071260   24502 start.go:167] duration metric: took 21.77974298s to libmachine.API.Create "ha-558946"
	I0910 17:50:48.071272   24502 start.go:293] postStartSetup for "ha-558946-m02" (driver="kvm2")
	I0910 17:50:48.071284   24502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 17:50:48.071305   24502 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:50:48.071536   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 17:50:48.071562   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:48.073425   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.073731   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:48.073758   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.073922   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:48.074073   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:48.074226   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:48.074377   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa Username:docker}
	I0910 17:50:48.159138   24502 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 17:50:48.163448   24502 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 17:50:48.163468   24502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 17:50:48.163522   24502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 17:50:48.163591   24502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 17:50:48.163600   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /etc/ssl/certs/131212.pem
	I0910 17:50:48.163677   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 17:50:48.172521   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 17:50:48.196168   24502 start.go:296] duration metric: took 124.877281ms for postStartSetup
	I0910 17:50:48.196213   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetConfigRaw
	I0910 17:50:48.196746   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetIP
	I0910 17:50:48.199300   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.199635   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:48.199660   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.199860   24502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:50:48.200025   24502 start.go:128] duration metric: took 21.926351928s to createHost
	I0910 17:50:48.200046   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:48.202478   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.202835   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:48.202856   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.203048   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:48.203280   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:48.203460   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:48.203641   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:48.203823   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:48.204006   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0910 17:50:48.204016   24502 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 17:50:48.317757   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725990648.288976030
	
	I0910 17:50:48.317778   24502 fix.go:216] guest clock: 1725990648.288976030
	I0910 17:50:48.317786   24502 fix.go:229] Guest: 2024-09-10 17:50:48.28897603 +0000 UTC Remote: 2024-09-10 17:50:48.200035363 +0000 UTC m=+69.145864566 (delta=88.940667ms)
	I0910 17:50:48.317799   24502 fix.go:200] guest clock delta is within tolerance: 88.940667ms
	I0910 17:50:48.317803   24502 start.go:83] releasing machines lock for "ha-558946-m02", held for 22.04420652s
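
The fix step above reads the guest clock over SSH (date +%s.%N) and compares it with the host's wall clock; the 88.940667ms delta is judged to be within tolerance. A small sketch of that comparison using the two timestamps from the log; the 2-second tolerance below is an assumption for illustration, not necessarily the value minikube uses:

package main

import (
	"fmt"
	"time"
)

// clockDelta reports how far the guest clock is from the host clock and whether
// the drift is within an allowed tolerance, as in the "guest clock delta is
// within tolerance" check above.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, d <= tolerance
}

func main() {
	// Both timestamps are taken from the log lines above.
	guest := time.Unix(0, 1725990648288976030)
	host := time.Date(2024, time.September, 10, 17, 50, 48, 200035363, time.UTC)
	d, ok := clockDelta(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok) // delta=88.940667ms within tolerance=true
}
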
	I0910 17:50:48.317820   24502 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:50:48.318049   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetIP
	I0910 17:50:48.320388   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.320723   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:48.320750   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.322540   24502 out.go:177] * Found network options:
	I0910 17:50:48.323634   24502 out.go:177]   - NO_PROXY=192.168.39.109
	W0910 17:50:48.324768   24502 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 17:50:48.324796   24502 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:50:48.325356   24502 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:50:48.325504   24502 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:50:48.325571   24502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 17:50:48.325614   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	W0910 17:50:48.325695   24502 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 17:50:48.325752   24502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 17:50:48.325775   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:48.328299   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.328326   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.328637   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:48.328672   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:48.328698   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.328713   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.329013   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:48.329044   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:48.329198   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:48.329207   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:48.329360   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:48.329422   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:48.329496   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa Username:docker}
	I0910 17:50:48.329546   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa Username:docker}
	I0910 17:50:48.565787   24502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 17:50:48.571778   24502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 17:50:48.571827   24502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 17:50:48.587592   24502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 17:50:48.587613   24502 start.go:495] detecting cgroup driver to use...
	I0910 17:50:48.587667   24502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 17:50:48.603346   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 17:50:48.616332   24502 docker.go:217] disabling cri-docker service (if available) ...
	I0910 17:50:48.616374   24502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 17:50:48.629056   24502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 17:50:48.641532   24502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 17:50:48.759370   24502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 17:50:48.897526   24502 docker.go:233] disabling docker service ...
	I0910 17:50:48.897595   24502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 17:50:48.911400   24502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 17:50:48.924332   24502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 17:50:49.055513   24502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 17:50:49.183688   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 17:50:49.197405   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 17:50:49.215069   24502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 17:50:49.215140   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:49.225078   24502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 17:50:49.225132   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:49.234974   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:49.244634   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:49.254338   24502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 17:50:49.264276   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:49.273976   24502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:49.290130   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:49.299886   24502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 17:50:49.308688   24502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 17:50:49.308762   24502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 17:50:49.320759   24502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
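
The failed sysctl above is expected on a fresh guest: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. A sketch of that check-then-load sequence; ensureBrNetfilter is an illustrative name, and the real code runs these commands over SSH via ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// ensureBrNetfilter reproduces the fallback above: if the bridge-nf-call-iptables
// sysctl cannot be read (module not loaded yet), load br_netfilter, then enable
// IPv4 forwarding. Requires root to actually succeed.
func ensureBrNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBrNetfilter(); err != nil {
		fmt.Println("error:", err)
	}
}
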
	I0910 17:50:49.329426   24502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:50:49.438096   24502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 17:50:49.528595   24502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 17:50:49.528657   24502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 17:50:49.533302   24502 start.go:563] Will wait 60s for crictl version
	I0910 17:50:49.533353   24502 ssh_runner.go:195] Run: which crictl
	I0910 17:50:49.537491   24502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 17:50:49.578565   24502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 17:50:49.578640   24502 ssh_runner.go:195] Run: crio --version
	I0910 17:50:49.610720   24502 ssh_runner.go:195] Run: crio --version
	I0910 17:50:49.640788   24502 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 17:50:49.642146   24502 out.go:177]   - env NO_PROXY=192.168.39.109
	I0910 17:50:49.643245   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetIP
	I0910 17:50:49.645873   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:49.646268   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:49.646293   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:49.646449   24502 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 17:50:49.650924   24502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
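
The command above refreshes the host.minikube.internal entry by filtering out any old line, appending the new ip<TAB>name pair to a scratch file, and copying that file over /etc/hosts in one step. The same idea in Go, writing to a temporary file and renaming it into place; upsertHostsEntry and the /tmp path are illustrative, and the log performs the final copy with sudo cp:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any existing line ending in "<TAB>hostname", appends a
// fresh "ip<TAB>hostname" entry, and replaces the file in a single step so it
// is never left half-written.
func upsertHostsEntry(hostsPath, ip, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+hostname) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := upsertHostsEntry("/tmp/hosts-demo", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}
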
	I0910 17:50:49.664779   24502 mustload.go:65] Loading cluster: ha-558946
	I0910 17:50:49.664939   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:50:49.665246   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:50:49.665273   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:50:49.679865   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36601
	I0910 17:50:49.680243   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:50:49.680688   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:50:49.680705   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:50:49.680978   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:50:49.681182   24502 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:50:49.682670   24502 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:50:49.682929   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:50:49.682959   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:50:49.698083   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33255
	I0910 17:50:49.698514   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:50:49.698944   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:50:49.698957   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:50:49.699229   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:50:49.699365   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:49.699545   24502 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946 for IP: 192.168.39.96
	I0910 17:50:49.699559   24502 certs.go:194] generating shared ca certs ...
	I0910 17:50:49.699576   24502 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:49.699683   24502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 17:50:49.699717   24502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 17:50:49.699726   24502 certs.go:256] generating profile certs ...
	I0910 17:50:49.699785   24502 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.key
	I0910 17:50:49.699808   24502 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.01327bff
	I0910 17:50:49.699822   24502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.01327bff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.109 192.168.39.96 192.168.39.254]
	I0910 17:50:50.007327   24502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.01327bff ...
	I0910 17:50:50.007355   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.01327bff: {Name:mkfa381ae2fc0a445f7d11499df3d390f9773ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:50.007535   24502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.01327bff ...
	I0910 17:50:50.007552   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.01327bff: {Name:mk1480193644e02512eef0392dfef1eaac9eed03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:50.007652   24502 certs.go:381] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.01327bff -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt
	I0910 17:50:50.007778   24502 certs.go:385] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.01327bff -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key
	I0910 17:50:50.007900   24502 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key
	I0910 17:50:50.007914   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 17:50:50.007925   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0910 17:50:50.007936   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 17:50:50.007949   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 17:50:50.007961   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 17:50:50.007973   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 17:50:50.007985   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 17:50:50.007997   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 17:50:50.008040   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 17:50:50.008068   24502 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 17:50:50.008077   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 17:50:50.008097   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 17:50:50.008118   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 17:50:50.008138   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 17:50:50.008174   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 17:50:50.008198   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:50:50.008212   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem -> /usr/share/ca-certificates/13121.pem
	I0910 17:50:50.008224   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /usr/share/ca-certificates/131212.pem
	I0910 17:50:50.008265   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:50.011368   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:50.011776   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:50.011803   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:50.012009   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:50.012152   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:50.012305   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:50.012394   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:50:50.089358   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0910 17:50:50.095687   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0910 17:50:50.109787   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0910 17:50:50.114258   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0910 17:50:50.126575   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0910 17:50:50.130919   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0910 17:50:50.142515   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0910 17:50:50.146718   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0910 17:50:50.156242   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0910 17:50:50.160610   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0910 17:50:50.169811   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0910 17:50:50.173872   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0910 17:50:50.185576   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 17:50:50.214190   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 17:50:50.237485   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 17:50:50.260239   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 17:50:50.283526   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0910 17:50:50.306686   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 17:50:50.330388   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 17:50:50.356195   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 17:50:50.378213   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 17:50:50.401337   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 17:50:50.423979   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 17:50:50.446336   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0910 17:50:50.464222   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0910 17:50:50.481182   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0910 17:50:50.497640   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0910 17:50:50.513639   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0910 17:50:50.530054   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0910 17:50:50.545355   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0910 17:50:50.560571   24502 ssh_runner.go:195] Run: openssl version
	I0910 17:50:50.565852   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 17:50:50.576017   24502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 17:50:50.580131   24502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 17:50:50.580174   24502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 17:50:50.585765   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 17:50:50.596008   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 17:50:50.606192   24502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 17:50:50.610245   24502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 17:50:50.610294   24502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 17:50:50.615708   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 17:50:50.625971   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 17:50:50.636249   24502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:50:50.640536   24502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:50:50.640581   24502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:50:50.645901   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
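
The openssl x509 -hash / ln -fs pairs above publish each CA certificate under the <subject-hash>.0 name (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL-style trust stores use for lookup. A Go sketch of the same operation; linkBySubjectHash is an illustrative name, and the log does this with sudo against /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of a certificate and
// exposes the certificate in certsDir under the "<hash>.0" name.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Recreate the symlink idempotently, as the shell's "test -L || ln -fs" does.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir()); err != nil {
		fmt.Println("error:", err)
	}
}
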
	I0910 17:50:50.656307   24502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 17:50:50.660154   24502 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 17:50:50.660201   24502 kubeadm.go:934] updating node {m02 192.168.39.96 8443 v1.31.0 crio true true} ...
	I0910 17:50:50.660284   24502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-558946-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
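
The kubelet drop-in above is rendered per node: the binaries path is derived from the Kubernetes version, while --hostname-override and --node-ip carry the node's own name and address. A sketch of that templating; kubeletExecStart is an illustrative helper, not minikube's actual template code:

package main

import "fmt"

// kubeletExecStart assembles the per-node kubelet invocation shown in the log
// from the three values that differ between nodes.
func kubeletExecStart(k8sVersion, hostname, nodeIP string) string {
	return fmt.Sprintf(
		"ExecStart=/var/lib/minikube/binaries/%s/kubelet "+
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
			"--config=/var/lib/kubelet/config.yaml "+
			"--hostname-override=%s "+
			"--kubeconfig=/etc/kubernetes/kubelet.conf "+
			"--node-ip=%s",
		k8sVersion, hostname, nodeIP)
}

func main() {
	fmt.Println(kubeletExecStart("v1.31.0", "ha-558946-m02", "192.168.39.96"))
}
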
	I0910 17:50:50.660314   24502 kube-vip.go:115] generating kube-vip config ...
	I0910 17:50:50.660349   24502 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0910 17:50:50.676111   24502 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0910 17:50:50.676175   24502 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0910 17:50:50.676226   24502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 17:50:50.691724   24502 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0910 17:50:50.691770   24502 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0910 17:50:50.702172   24502 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0910 17:50:50.702192   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 17:50:50.702200   24502 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0910 17:50:50.702210   24502 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0910 17:50:50.702239   24502 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 17:50:50.706642   24502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0910 17:50:50.706666   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0910 17:50:51.267193   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 17:50:51.267267   24502 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 17:50:51.272358   24502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0910 17:50:51.272392   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0910 17:50:51.553190   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:50:51.567070   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 17:50:51.567155   24502 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 17:50:51.572688   24502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0910 17:50:51.572717   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
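
Each binary above is fetched from dl.k8s.io with a ?checksum=file:…sha256 companion URL and copied into /var/lib/minikube/binaries/<version> only when the stat probe shows it missing. A minimal sketch of downloading one release binary and verifying it against its published SHA-256, assuming the .sha256 file contains just the hex digest; it omits the caching, retries and progress reporting the real downloader adds:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dest and checks the bytes against the
// SHA-256 published at url+".sha256".
func fetchVerified(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
	}
	return nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"
	if err := fetchVerified(url, "/tmp/kubectl"); err != nil {
		fmt.Println("download failed:", err)
	}
}
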
	I0910 17:50:51.864476   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0910 17:50:51.873773   24502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0910 17:50:51.890413   24502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 17:50:51.906916   24502 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0910 17:50:51.923468   24502 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0910 17:50:51.927336   24502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:50:51.939439   24502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:50:52.080482   24502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:50:52.098304   24502 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:50:52.098773   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:50:52.098828   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:50:52.113446   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46751
	I0910 17:50:52.113848   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:50:52.114302   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:50:52.114316   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:50:52.114605   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:50:52.114763   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:52.114925   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:52.114925   24502 start.go:317] joinCluster: &{Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:50:52.115031   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0910 17:50:52.115054   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:52.118056   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:52.118510   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:52.118536   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:52.118675   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:52.118848   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:52.118987   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:52.119153   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:50:52.267488   24502 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:50:52.267535   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token efs0vc.gxraj55oklb55bap --discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-558946-m02 --control-plane --apiserver-advertise-address=192.168.39.96 --apiserver-bind-port=8443"
	I0910 17:51:13.456561   24502 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token efs0vc.gxraj55oklb55bap --discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-558946-m02 --control-plane --apiserver-advertise-address=192.168.39.96 --apiserver-bind-port=8443": (21.188985738s)
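
The join command above is produced by "kubeadm token create --print-join-command" on the existing control plane; its --discovery-token-ca-cert-hash pins the cluster CA by the SHA-256 of the CA certificate's Subject Public Key Info. A sketch of computing that value from a CA certificate file; /etc/kubernetes/pki/ca.crt is kubeadm's default location for it on a control-plane node:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash returns the value passed as --discovery-token-ca-cert-hash: the
// SHA-256 of the CA certificate's DER-encoded Subject Public Key Info.
func caCertHash(caPath string) (string, error) {
	data, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	h, err := caCertHash("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(h)
}
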
	I0910 17:51:13.456620   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0910 17:51:13.939315   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-558946-m02 minikube.k8s.io/updated_at=2024_09_10T17_51_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=ha-558946 minikube.k8s.io/primary=false
	I0910 17:51:14.083936   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-558946-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0910 17:51:14.222359   24502 start.go:319] duration metric: took 22.107427814s to joinCluster
	I0910 17:51:14.222492   24502 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:51:14.222804   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:51:14.224007   24502 out.go:177] * Verifying Kubernetes components...
	I0910 17:51:14.225334   24502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:51:14.506720   24502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:51:14.561822   24502 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:51:14.562140   24502 kapi.go:59] client config for ha-558946: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.crt", KeyFile:"/home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.key", CAFile:"/home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f2c360), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0910 17:51:14.562238   24502 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.109:8443
	I0910 17:51:14.562514   24502 node_ready.go:35] waiting up to 6m0s for node "ha-558946-m02" to be "Ready" ...
	I0910 17:51:14.562681   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:14.562692   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:14.562699   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:14.562703   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:14.573177   24502 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0910 17:51:15.062865   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:15.062883   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:15.062891   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:15.062894   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:15.066799   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:15.562701   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:15.562721   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:15.562728   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:15.562733   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:15.567488   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:51:16.062971   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:16.062990   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:16.062998   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:16.063002   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:16.070102   24502 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 17:51:16.563715   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:16.563735   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:16.563746   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:16.563751   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:16.567011   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:16.571018   24502 node_ready.go:53] node "ha-558946-m02" has status "Ready":"False"
	I0910 17:51:17.063398   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:17.063424   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:17.063435   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:17.063442   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:17.066914   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:17.563293   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:17.563313   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:17.563321   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:17.563324   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:17.566577   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:18.063498   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:18.063518   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:18.063525   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:18.063529   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:18.067084   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:18.563143   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:18.563169   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:18.563177   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:18.563182   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:18.567406   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:51:19.062809   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:19.062831   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:19.062841   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:19.062848   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:19.066248   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:19.066989   24502 node_ready.go:53] node "ha-558946-m02" has status "Ready":"False"
	I0910 17:51:19.562731   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:19.562749   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:19.562757   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:19.562760   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:19.566574   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:20.063451   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:20.063476   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:20.063486   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:20.063496   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:20.066873   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:20.562908   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:20.562931   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:20.562942   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:20.562947   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:20.565923   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:21.062891   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:21.062924   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:21.062936   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:21.062944   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:21.065940   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:21.563117   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:21.563137   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:21.563147   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:21.563152   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:21.566136   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:21.566568   24502 node_ready.go:53] node "ha-558946-m02" has status "Ready":"False"
	I0910 17:51:22.062927   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:22.062948   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:22.062955   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:22.062959   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:22.066284   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:22.563595   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:22.563617   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:22.563624   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:22.563631   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:22.566858   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:23.062809   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:23.062829   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:23.062837   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:23.062842   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:23.066208   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:23.563196   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:23.563221   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:23.563232   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:23.563238   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:23.566084   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:23.566655   24502 node_ready.go:53] node "ha-558946-m02" has status "Ready":"False"
	I0910 17:51:24.062985   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:24.063023   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:24.063030   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:24.063034   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:24.065854   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:24.563378   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:24.563395   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:24.563403   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:24.563406   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:24.566311   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:25.063016   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:25.063038   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:25.063046   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:25.063051   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:25.066024   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:25.563229   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:25.563249   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:25.563258   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:25.563261   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:25.566574   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:25.567168   24502 node_ready.go:53] node "ha-558946-m02" has status "Ready":"False"
	I0910 17:51:26.063741   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:26.063760   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:26.063767   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:26.063771   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:26.066805   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:26.563428   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:26.563449   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:26.563456   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:26.563459   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:26.567621   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:51:27.062709   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:27.062731   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:27.062739   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:27.062744   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:27.066061   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:27.563663   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:27.563687   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:27.563695   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:27.563699   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:27.567286   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:27.567776   24502 node_ready.go:53] node "ha-558946-m02" has status "Ready":"False"
	I0910 17:51:28.063156   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:28.063178   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.063185   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.063192   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.067527   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:51:28.563482   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:28.563507   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.563516   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.563519   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.566367   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:28.566948   24502 node_ready.go:49] node "ha-558946-m02" has status "Ready":"True"
	I0910 17:51:28.566980   24502 node_ready.go:38] duration metric: took 14.004409051s for node "ha-558946-m02" to be "Ready" ...
	I0910 17:51:28.566992   24502 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:51:28.567082   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:51:28.567092   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.567101   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.567107   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.571241   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:51:28.579735   24502 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-5pv7s" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.579820   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-5pv7s
	I0910 17:51:28.579831   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.579841   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.579849   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.583079   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:28.585580   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:28.585595   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.585604   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.585612   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.587877   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:28.588600   24502 pod_ready.go:93] pod "coredns-6f6b679f8f-5pv7s" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:28.588617   24502 pod_ready.go:82] duration metric: took 8.861813ms for pod "coredns-6f6b679f8f-5pv7s" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.588625   24502 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fmcmc" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.588681   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-fmcmc
	I0910 17:51:28.588691   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.588701   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.588709   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.591647   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:28.592207   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:28.592219   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.592225   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.592228   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.595005   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:28.595955   24502 pod_ready.go:93] pod "coredns-6f6b679f8f-fmcmc" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:28.595978   24502 pod_ready.go:82] duration metric: took 7.345951ms for pod "coredns-6f6b679f8f-fmcmc" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.595989   24502 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.596049   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-ha-558946
	I0910 17:51:28.596062   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.596072   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.596081   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.598101   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:28.598684   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:28.598698   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.598703   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.598710   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.600798   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:28.601421   24502 pod_ready.go:93] pod "etcd-ha-558946" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:28.601444   24502 pod_ready.go:82] duration metric: took 5.442437ms for pod "etcd-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.601454   24502 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.601507   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-ha-558946-m02
	I0910 17:51:28.601519   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.601529   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.601537   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.603535   24502 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 17:51:28.604125   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:28.604138   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.604145   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.604149   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.606230   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:28.606781   24502 pod_ready.go:93] pod "etcd-ha-558946-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:28.606802   24502 pod_ready.go:82] duration metric: took 5.339798ms for pod "etcd-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.606819   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.764183   24502 request.go:632] Waited for 157.311635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946
	I0910 17:51:28.764266   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946
	I0910 17:51:28.764272   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.764281   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.764285   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.767572   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:28.964543   24502 request.go:632] Waited for 196.357743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:28.964593   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:28.964598   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.964605   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.964608   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.967663   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:28.968220   24502 pod_ready.go:93] pod "kube-apiserver-ha-558946" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:28.968241   24502 pod_ready.go:82] duration metric: took 361.411821ms for pod "kube-apiserver-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.968253   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:29.164556   24502 request.go:632] Waited for 196.24116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946-m02
	I0910 17:51:29.164621   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946-m02
	I0910 17:51:29.164630   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:29.164638   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:29.164645   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:29.167001   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:29.364067   24502 request.go:632] Waited for 196.374164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:29.364119   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:29.364124   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:29.364130   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:29.364134   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:29.367203   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:29.367698   24502 pod_ready.go:93] pod "kube-apiserver-ha-558946-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:29.367715   24502 pod_ready.go:82] duration metric: took 399.454798ms for pod "kube-apiserver-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:29.367723   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:29.563838   24502 request.go:632] Waited for 196.057022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946
	I0910 17:51:29.563911   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946
	I0910 17:51:29.563917   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:29.563926   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:29.563930   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:29.567450   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:29.763711   24502 request.go:632] Waited for 195.646381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:29.763760   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:29.763765   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:29.763772   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:29.763775   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:29.766513   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:29.767086   24502 pod_ready.go:93] pod "kube-controller-manager-ha-558946" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:29.767110   24502 pod_ready.go:82] duration metric: took 399.379451ms for pod "kube-controller-manager-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:29.767125   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:29.964138   24502 request.go:632] Waited for 196.946066ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946-m02
	I0910 17:51:29.964197   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946-m02
	I0910 17:51:29.964213   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:29.964223   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:29.964229   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:29.967201   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:30.164383   24502 request.go:632] Waited for 196.414667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:30.164460   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:30.164467   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:30.164475   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:30.164484   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:30.167334   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:30.167763   24502 pod_ready.go:93] pod "kube-controller-manager-ha-558946-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:30.167782   24502 pod_ready.go:82] duration metric: took 400.648369ms for pod "kube-controller-manager-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:30.167792   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gjqzx" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:30.363917   24502 request.go:632] Waited for 196.065663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjqzx
	I0910 17:51:30.364004   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjqzx
	I0910 17:51:30.364014   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:30.364028   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:30.364037   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:30.366865   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:30.564369   24502 request.go:632] Waited for 196.350473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:30.564423   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:30.564429   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:30.564439   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:30.564444   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:30.572510   24502 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 17:51:30.572959   24502 pod_ready.go:93] pod "kube-proxy-gjqzx" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:30.572976   24502 pod_ready.go:82] duration metric: took 405.17516ms for pod "kube-proxy-gjqzx" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:30.572988   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xggtm" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:30.764121   24502 request.go:632] Waited for 191.070402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xggtm
	I0910 17:51:30.764196   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xggtm
	I0910 17:51:30.764204   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:30.764211   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:30.764219   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:30.767402   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:30.964422   24502 request.go:632] Waited for 196.366316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:30.964475   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:30.964480   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:30.964489   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:30.964496   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:30.967699   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:30.968152   24502 pod_ready.go:93] pod "kube-proxy-xggtm" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:30.968167   24502 pod_ready.go:82] duration metric: took 395.172639ms for pod "kube-proxy-xggtm" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:30.968175   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:31.164293   24502 request.go:632] Waited for 196.0607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946
	I0910 17:51:31.164366   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946
	I0910 17:51:31.164375   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:31.164382   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:31.164388   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:31.167528   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:31.364452   24502 request.go:632] Waited for 196.327135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:31.364538   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:31.364549   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:31.364560   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:31.364569   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:31.367389   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:31.367921   24502 pod_ready.go:93] pod "kube-scheduler-ha-558946" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:31.367938   24502 pod_ready.go:82] duration metric: took 399.757768ms for pod "kube-scheduler-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:31.367948   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:31.564114   24502 request.go:632] Waited for 196.105026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946-m02
	I0910 17:51:31.564170   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946-m02
	I0910 17:51:31.564176   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:31.564189   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:31.564207   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:31.567246   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:31.764315   24502 request.go:632] Waited for 196.351153ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:31.764372   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:31.764377   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:31.764385   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:31.764389   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:31.767357   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:31.767711   24502 pod_ready.go:93] pod "kube-scheduler-ha-558946-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:31.767726   24502 pod_ready.go:82] duration metric: took 399.772816ms for pod "kube-scheduler-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:31.767736   24502 pod_ready.go:39] duration metric: took 3.200729041s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:51:31.767758   24502 api_server.go:52] waiting for apiserver process to appear ...
	I0910 17:51:31.767808   24502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:51:31.784194   24502 api_server.go:72] duration metric: took 17.561653367s to wait for apiserver process to appear ...
	I0910 17:51:31.784214   24502 api_server.go:88] waiting for apiserver healthz status ...
	I0910 17:51:31.784234   24502 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I0910 17:51:31.789969   24502 api_server.go:279] https://192.168.39.109:8443/healthz returned 200:
	ok
	I0910 17:51:31.790039   24502 round_trippers.go:463] GET https://192.168.39.109:8443/version
	I0910 17:51:31.790050   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:31.790061   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:31.790070   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:31.790851   24502 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0910 17:51:31.790982   24502 api_server.go:141] control plane version: v1.31.0
	I0910 17:51:31.791003   24502 api_server.go:131] duration metric: took 6.782084ms to wait for apiserver health ...
	I0910 17:51:31.791020   24502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 17:51:31.964413   24502 request.go:632] Waited for 173.326677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:51:31.964477   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:51:31.964482   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:31.964489   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:31.964506   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:31.969000   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:51:31.974052   24502 system_pods.go:59] 17 kube-system pods found
	I0910 17:51:31.974080   24502 system_pods.go:61] "coredns-6f6b679f8f-5pv7s" [e75ceddc-7576-45f6-8b80-2071bc7fbef8] Running
	I0910 17:51:31.974084   24502 system_pods.go:61] "coredns-6f6b679f8f-fmcmc" [0d79d296-3ee7-4b7b-8869-e45465da70ff] Running
	I0910 17:51:31.974089   24502 system_pods.go:61] "etcd-ha-558946" [d99a9237-7866-40f1-95d6-c6488183479e] Running
	I0910 17:51:31.974092   24502 system_pods.go:61] "etcd-ha-558946-m02" [d22427c5-1548-4bd2-b1c1-5a6a4353077a] Running
	I0910 17:51:31.974096   24502 system_pods.go:61] "kindnet-n8n67" [019cf933-bf89-485d-a837-bf8bbedbc0df] Running
	I0910 17:51:31.974100   24502 system_pods.go:61] "kindnet-sfr7m" [31ccb06a-6f76-4a18-894c-707993f766e5] Running
	I0910 17:51:31.974103   24502 system_pods.go:61] "kube-apiserver-ha-558946" [74003dbd-903b-48de-b85f-973654d0d58e] Running
	I0910 17:51:31.974106   24502 system_pods.go:61] "kube-apiserver-ha-558946-m02" [9136cd3a-a68e-4167-808d-61b33978cf45] Running
	I0910 17:51:31.974110   24502 system_pods.go:61] "kube-controller-manager-ha-558946" [82453b26-31b3-4c6e-8e37-26eb141923fc] Running
	I0910 17:51:31.974113   24502 system_pods.go:61] "kube-controller-manager-ha-558946-m02" [d658071a-4335-4933-88c8-4d2cfccb40df] Running
	I0910 17:51:31.974116   24502 system_pods.go:61] "kube-proxy-gjqzx" [35a3fe57-a2d6-4134-8205-ce5c8d09b707] Running
	I0910 17:51:31.974120   24502 system_pods.go:61] "kube-proxy-xggtm" [347371e4-83b7-474c-8924-d33c479d736a] Running
	I0910 17:51:31.974123   24502 system_pods.go:61] "kube-scheduler-ha-558946" [e99973ac-5718-4769-99e3-282c3c25b8f8] Running
	I0910 17:51:31.974126   24502 system_pods.go:61] "kube-scheduler-ha-558946-m02" [6c57c232-f86e-417c-b3a6-867b3ed443bf] Running
	I0910 17:51:31.974129   24502 system_pods.go:61] "kube-vip-ha-558946" [810f85ef-6900-456e-877e-095d38286613] Running
	I0910 17:51:31.974132   24502 system_pods.go:61] "kube-vip-ha-558946-m02" [59850a02-4ce3-47dc-a250-f18c0fd9533c] Running
	I0910 17:51:31.974134   24502 system_pods.go:61] "storage-provisioner" [baf5cd7e-5266-4d55-bd6c-459257baa463] Running
	I0910 17:51:31.974141   24502 system_pods.go:74] duration metric: took 183.113705ms to wait for pod list to return data ...
	I0910 17:51:31.974149   24502 default_sa.go:34] waiting for default service account to be created ...
	I0910 17:51:32.164305   24502 request.go:632] Waited for 190.09264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/default/serviceaccounts
	I0910 17:51:32.164357   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/default/serviceaccounts
	I0910 17:51:32.164362   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:32.164369   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:32.164373   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:32.168172   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:32.168453   24502 default_sa.go:45] found service account: "default"
	I0910 17:51:32.168474   24502 default_sa.go:55] duration metric: took 194.318196ms for default service account to be created ...
	I0910 17:51:32.168484   24502 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 17:51:32.363890   24502 request.go:632] Waited for 195.339749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:51:32.363950   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:51:32.363964   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:32.363976   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:32.363985   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:32.367968   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:32.372851   24502 system_pods.go:86] 17 kube-system pods found
	I0910 17:51:32.372873   24502 system_pods.go:89] "coredns-6f6b679f8f-5pv7s" [e75ceddc-7576-45f6-8b80-2071bc7fbef8] Running
	I0910 17:51:32.372878   24502 system_pods.go:89] "coredns-6f6b679f8f-fmcmc" [0d79d296-3ee7-4b7b-8869-e45465da70ff] Running
	I0910 17:51:32.372883   24502 system_pods.go:89] "etcd-ha-558946" [d99a9237-7866-40f1-95d6-c6488183479e] Running
	I0910 17:51:32.372887   24502 system_pods.go:89] "etcd-ha-558946-m02" [d22427c5-1548-4bd2-b1c1-5a6a4353077a] Running
	I0910 17:51:32.372891   24502 system_pods.go:89] "kindnet-n8n67" [019cf933-bf89-485d-a837-bf8bbedbc0df] Running
	I0910 17:51:32.372894   24502 system_pods.go:89] "kindnet-sfr7m" [31ccb06a-6f76-4a18-894c-707993f766e5] Running
	I0910 17:51:32.372898   24502 system_pods.go:89] "kube-apiserver-ha-558946" [74003dbd-903b-48de-b85f-973654d0d58e] Running
	I0910 17:51:32.372901   24502 system_pods.go:89] "kube-apiserver-ha-558946-m02" [9136cd3a-a68e-4167-808d-61b33978cf45] Running
	I0910 17:51:32.372905   24502 system_pods.go:89] "kube-controller-manager-ha-558946" [82453b26-31b3-4c6e-8e37-26eb141923fc] Running
	I0910 17:51:32.372908   24502 system_pods.go:89] "kube-controller-manager-ha-558946-m02" [d658071a-4335-4933-88c8-4d2cfccb40df] Running
	I0910 17:51:32.372911   24502 system_pods.go:89] "kube-proxy-gjqzx" [35a3fe57-a2d6-4134-8205-ce5c8d09b707] Running
	I0910 17:51:32.372915   24502 system_pods.go:89] "kube-proxy-xggtm" [347371e4-83b7-474c-8924-d33c479d736a] Running
	I0910 17:51:32.372918   24502 system_pods.go:89] "kube-scheduler-ha-558946" [e99973ac-5718-4769-99e3-282c3c25b8f8] Running
	I0910 17:51:32.372921   24502 system_pods.go:89] "kube-scheduler-ha-558946-m02" [6c57c232-f86e-417c-b3a6-867b3ed443bf] Running
	I0910 17:51:32.372926   24502 system_pods.go:89] "kube-vip-ha-558946" [810f85ef-6900-456e-877e-095d38286613] Running
	I0910 17:51:32.372932   24502 system_pods.go:89] "kube-vip-ha-558946-m02" [59850a02-4ce3-47dc-a250-f18c0fd9533c] Running
	I0910 17:51:32.372934   24502 system_pods.go:89] "storage-provisioner" [baf5cd7e-5266-4d55-bd6c-459257baa463] Running
	I0910 17:51:32.372940   24502 system_pods.go:126] duration metric: took 204.447248ms to wait for k8s-apps to be running ...
	I0910 17:51:32.372948   24502 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 17:51:32.372987   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:51:32.387671   24502 system_svc.go:56] duration metric: took 14.714456ms WaitForService to wait for kubelet
	I0910 17:51:32.387696   24502 kubeadm.go:582] duration metric: took 18.165156927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:51:32.387732   24502 node_conditions.go:102] verifying NodePressure condition ...
	I0910 17:51:32.564145   24502 request.go:632] Waited for 176.338842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes
	I0910 17:51:32.564204   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes
	I0910 17:51:32.564212   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:32.564220   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:32.564229   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:32.567596   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:32.568287   24502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 17:51:32.568308   24502 node_conditions.go:123] node cpu capacity is 2
	I0910 17:51:32.568338   24502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 17:51:32.568348   24502 node_conditions.go:123] node cpu capacity is 2
	I0910 17:51:32.568355   24502 node_conditions.go:105] duration metric: took 180.614589ms to run NodePressure ...
	I0910 17:51:32.568373   24502 start.go:241] waiting for startup goroutines ...
	I0910 17:51:32.568418   24502 start.go:255] writing updated cluster config ...
	I0910 17:51:32.570407   24502 out.go:201] 
	I0910 17:51:32.571730   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:51:32.571863   24502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:51:32.573371   24502 out.go:177] * Starting "ha-558946-m03" control-plane node in "ha-558946" cluster
	I0910 17:51:32.574294   24502 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:51:32.574313   24502 cache.go:56] Caching tarball of preloaded images
	I0910 17:51:32.574417   24502 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 17:51:32.574429   24502 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 17:51:32.574521   24502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:51:32.574746   24502 start.go:360] acquireMachinesLock for ha-558946-m03: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 17:51:32.574800   24502 start.go:364] duration metric: took 35.284µs to acquireMachinesLock for "ha-558946-m03"
	I0910 17:51:32.574829   24502 start.go:93] Provisioning new machine with config: &{Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:51:32.574942   24502 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0910 17:51:32.576218   24502 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 17:51:32.576317   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:51:32.576351   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:51:32.591822   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0910 17:51:32.592230   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:51:32.592797   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:51:32.592826   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:51:32.593167   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:51:32.593344   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetMachineName
	I0910 17:51:32.593482   24502 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:51:32.593626   24502 start.go:159] libmachine.API.Create for "ha-558946" (driver="kvm2")
	I0910 17:51:32.593654   24502 client.go:168] LocalClient.Create starting
	I0910 17:51:32.593689   24502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem
	I0910 17:51:32.593718   24502 main.go:141] libmachine: Decoding PEM data...
	I0910 17:51:32.593733   24502 main.go:141] libmachine: Parsing certificate...
	I0910 17:51:32.593781   24502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem
	I0910 17:51:32.593799   24502 main.go:141] libmachine: Decoding PEM data...
	I0910 17:51:32.593809   24502 main.go:141] libmachine: Parsing certificate...
	I0910 17:51:32.593828   24502 main.go:141] libmachine: Running pre-create checks...
	I0910 17:51:32.593836   24502 main.go:141] libmachine: (ha-558946-m03) Calling .PreCreateCheck
	I0910 17:51:32.593992   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetConfigRaw
	I0910 17:51:32.594338   24502 main.go:141] libmachine: Creating machine...
	I0910 17:51:32.594353   24502 main.go:141] libmachine: (ha-558946-m03) Calling .Create
	I0910 17:51:32.594486   24502 main.go:141] libmachine: (ha-558946-m03) Creating KVM machine...
	I0910 17:51:32.595809   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found existing default KVM network
	I0910 17:51:32.595945   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found existing private KVM network mk-ha-558946
	I0910 17:51:32.596089   24502 main.go:141] libmachine: (ha-558946-m03) Setting up store path in /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03 ...
	I0910 17:51:32.596114   24502 main.go:141] libmachine: (ha-558946-m03) Building disk image from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 17:51:32.596186   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:32.596074   25238 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:51:32.596285   24502 main.go:141] libmachine: (ha-558946-m03) Downloading /home/jenkins/minikube-integration/19598-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 17:51:32.820086   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:32.819982   25238 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa...
	I0910 17:51:32.939951   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:32.939817   25238 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/ha-558946-m03.rawdisk...
	I0910 17:51:32.939979   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Writing magic tar header
	I0910 17:51:32.939989   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Writing SSH key tar header
	I0910 17:51:32.939998   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:32.939949   25238 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03 ...
	I0910 17:51:32.940114   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03
	I0910 17:51:32.940145   24502 main.go:141] libmachine: (ha-558946-m03) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03 (perms=drwx------)
	I0910 17:51:32.940160   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines
	I0910 17:51:32.940179   24502 main.go:141] libmachine: (ha-558946-m03) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines (perms=drwxr-xr-x)
	I0910 17:51:32.940196   24502 main.go:141] libmachine: (ha-558946-m03) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube (perms=drwxr-xr-x)
	I0910 17:51:32.940204   24502 main.go:141] libmachine: (ha-558946-m03) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973 (perms=drwxrwxr-x)
	I0910 17:51:32.940216   24502 main.go:141] libmachine: (ha-558946-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0910 17:51:32.940241   24502 main.go:141] libmachine: (ha-558946-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0910 17:51:32.940256   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:51:32.940267   24502 main.go:141] libmachine: (ha-558946-m03) Creating domain...
	I0910 17:51:32.940285   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973
	I0910 17:51:32.940299   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0910 17:51:32.940315   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Checking permissions on dir: /home/jenkins
	I0910 17:51:32.940327   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Checking permissions on dir: /home
	I0910 17:51:32.940340   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Skipping /home - not owner
	I0910 17:51:32.941201   24502 main.go:141] libmachine: (ha-558946-m03) define libvirt domain using xml: 
	I0910 17:51:32.941218   24502 main.go:141] libmachine: (ha-558946-m03) <domain type='kvm'>
	I0910 17:51:32.941225   24502 main.go:141] libmachine: (ha-558946-m03)   <name>ha-558946-m03</name>
	I0910 17:51:32.941230   24502 main.go:141] libmachine: (ha-558946-m03)   <memory unit='MiB'>2200</memory>
	I0910 17:51:32.941235   24502 main.go:141] libmachine: (ha-558946-m03)   <vcpu>2</vcpu>
	I0910 17:51:32.941243   24502 main.go:141] libmachine: (ha-558946-m03)   <features>
	I0910 17:51:32.941248   24502 main.go:141] libmachine: (ha-558946-m03)     <acpi/>
	I0910 17:51:32.941253   24502 main.go:141] libmachine: (ha-558946-m03)     <apic/>
	I0910 17:51:32.941257   24502 main.go:141] libmachine: (ha-558946-m03)     <pae/>
	I0910 17:51:32.941262   24502 main.go:141] libmachine: (ha-558946-m03)     
	I0910 17:51:32.941267   24502 main.go:141] libmachine: (ha-558946-m03)   </features>
	I0910 17:51:32.941272   24502 main.go:141] libmachine: (ha-558946-m03)   <cpu mode='host-passthrough'>
	I0910 17:51:32.941277   24502 main.go:141] libmachine: (ha-558946-m03)   
	I0910 17:51:32.941281   24502 main.go:141] libmachine: (ha-558946-m03)   </cpu>
	I0910 17:51:32.941287   24502 main.go:141] libmachine: (ha-558946-m03)   <os>
	I0910 17:51:32.941293   24502 main.go:141] libmachine: (ha-558946-m03)     <type>hvm</type>
	I0910 17:51:32.941298   24502 main.go:141] libmachine: (ha-558946-m03)     <boot dev='cdrom'/>
	I0910 17:51:32.941309   24502 main.go:141] libmachine: (ha-558946-m03)     <boot dev='hd'/>
	I0910 17:51:32.941314   24502 main.go:141] libmachine: (ha-558946-m03)     <bootmenu enable='no'/>
	I0910 17:51:32.941323   24502 main.go:141] libmachine: (ha-558946-m03)   </os>
	I0910 17:51:32.941329   24502 main.go:141] libmachine: (ha-558946-m03)   <devices>
	I0910 17:51:32.941341   24502 main.go:141] libmachine: (ha-558946-m03)     <disk type='file' device='cdrom'>
	I0910 17:51:32.941426   24502 main.go:141] libmachine: (ha-558946-m03)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/boot2docker.iso'/>
	I0910 17:51:32.941455   24502 main.go:141] libmachine: (ha-558946-m03)       <target dev='hdc' bus='scsi'/>
	I0910 17:51:32.941471   24502 main.go:141] libmachine: (ha-558946-m03)       <readonly/>
	I0910 17:51:32.941484   24502 main.go:141] libmachine: (ha-558946-m03)     </disk>
	I0910 17:51:32.941495   24502 main.go:141] libmachine: (ha-558946-m03)     <disk type='file' device='disk'>
	I0910 17:51:32.941506   24502 main.go:141] libmachine: (ha-558946-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0910 17:51:32.941521   24502 main.go:141] libmachine: (ha-558946-m03)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/ha-558946-m03.rawdisk'/>
	I0910 17:51:32.941533   24502 main.go:141] libmachine: (ha-558946-m03)       <target dev='hda' bus='virtio'/>
	I0910 17:51:32.941543   24502 main.go:141] libmachine: (ha-558946-m03)     </disk>
	I0910 17:51:32.941558   24502 main.go:141] libmachine: (ha-558946-m03)     <interface type='network'>
	I0910 17:51:32.941570   24502 main.go:141] libmachine: (ha-558946-m03)       <source network='mk-ha-558946'/>
	I0910 17:51:32.941579   24502 main.go:141] libmachine: (ha-558946-m03)       <model type='virtio'/>
	I0910 17:51:32.941588   24502 main.go:141] libmachine: (ha-558946-m03)     </interface>
	I0910 17:51:32.941597   24502 main.go:141] libmachine: (ha-558946-m03)     <interface type='network'>
	I0910 17:51:32.941611   24502 main.go:141] libmachine: (ha-558946-m03)       <source network='default'/>
	I0910 17:51:32.941619   24502 main.go:141] libmachine: (ha-558946-m03)       <model type='virtio'/>
	I0910 17:51:32.941643   24502 main.go:141] libmachine: (ha-558946-m03)     </interface>
	I0910 17:51:32.941670   24502 main.go:141] libmachine: (ha-558946-m03)     <serial type='pty'>
	I0910 17:51:32.941685   24502 main.go:141] libmachine: (ha-558946-m03)       <target port='0'/>
	I0910 17:51:32.941698   24502 main.go:141] libmachine: (ha-558946-m03)     </serial>
	I0910 17:51:32.941708   24502 main.go:141] libmachine: (ha-558946-m03)     <console type='pty'>
	I0910 17:51:32.941720   24502 main.go:141] libmachine: (ha-558946-m03)       <target type='serial' port='0'/>
	I0910 17:51:32.941731   24502 main.go:141] libmachine: (ha-558946-m03)     </console>
	I0910 17:51:32.941742   24502 main.go:141] libmachine: (ha-558946-m03)     <rng model='virtio'>
	I0910 17:51:32.941754   24502 main.go:141] libmachine: (ha-558946-m03)       <backend model='random'>/dev/random</backend>
	I0910 17:51:32.941763   24502 main.go:141] libmachine: (ha-558946-m03)     </rng>
	I0910 17:51:32.941783   24502 main.go:141] libmachine: (ha-558946-m03)     
	I0910 17:51:32.941800   24502 main.go:141] libmachine: (ha-558946-m03)     
	I0910 17:51:32.941818   24502 main.go:141] libmachine: (ha-558946-m03)   </devices>
	I0910 17:51:32.941889   24502 main.go:141] libmachine: (ha-558946-m03) </domain>
	I0910 17:51:32.941910   24502 main.go:141] libmachine: (ha-558946-m03) 
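The XML above is the libvirt domain definition the kvm2 driver generates for the new node: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO as a CD-ROM boot device, the raw disk image, and two virtio NICs (the private mk-ha-558946 network plus the default network). As an illustrative sketch only, not minikube's actual code, defining and starting such a domain with the Go libvirt bindings could look like the following; the import path, connection URI, and domain.xml file are assumptions.

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
)

func main() {
	// Connect to the local system libvirt daemon (URI is an assumption).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// domain.xml would hold a definition like the one logged above.
	xmlDef, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	// Define the persistent domain, then start it (the log's "Creating domain...").
	dom, err := conn.DomainDefineXML(string(xmlDef))
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
	log.Println("domain defined and started")
}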
	I0910 17:51:32.948606   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:df:e2:10 in network default
	I0910 17:51:32.949137   24502 main.go:141] libmachine: (ha-558946-m03) Ensuring networks are active...
	I0910 17:51:32.949162   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:32.949767   24502 main.go:141] libmachine: (ha-558946-m03) Ensuring network default is active
	I0910 17:51:32.950076   24502 main.go:141] libmachine: (ha-558946-m03) Ensuring network mk-ha-558946 is active
	I0910 17:51:32.950451   24502 main.go:141] libmachine: (ha-558946-m03) Getting domain xml...
	I0910 17:51:32.951130   24502 main.go:141] libmachine: (ha-558946-m03) Creating domain...
	I0910 17:51:34.160910   24502 main.go:141] libmachine: (ha-558946-m03) Waiting to get IP...
	I0910 17:51:34.161915   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:34.162337   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:34.162365   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:34.162301   25238 retry.go:31] will retry after 192.308586ms: waiting for machine to come up
	I0910 17:51:34.356851   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:34.357348   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:34.357374   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:34.357310   25238 retry.go:31] will retry after 235.950538ms: waiting for machine to come up
	I0910 17:51:34.594621   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:34.595181   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:34.595210   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:34.595135   25238 retry.go:31] will retry after 319.216711ms: waiting for machine to come up
	I0910 17:51:34.915429   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:34.915849   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:34.915875   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:34.915823   25238 retry.go:31] will retry after 437.191559ms: waiting for machine to come up
	I0910 17:51:35.354134   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:35.354569   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:35.354596   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:35.354518   25238 retry.go:31] will retry after 527.344491ms: waiting for machine to come up
	I0910 17:51:35.883063   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:35.883454   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:35.883478   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:35.883416   25238 retry.go:31] will retry after 887.020425ms: waiting for machine to come up
	I0910 17:51:36.771488   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:36.771891   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:36.771913   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:36.771846   25238 retry.go:31] will retry after 747.567374ms: waiting for machine to come up
	I0910 17:51:37.520868   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:37.521285   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:37.521312   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:37.521233   25238 retry.go:31] will retry after 1.2299808s: waiting for machine to come up
	I0910 17:51:38.752317   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:38.752751   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:38.752770   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:38.752716   25238 retry.go:31] will retry after 1.636100072s: waiting for machine to come up
	I0910 17:51:40.391631   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:40.392063   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:40.392115   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:40.392040   25238 retry.go:31] will retry after 1.90887496s: waiting for machine to come up
	I0910 17:51:42.302712   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:42.303213   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:42.303247   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:42.303157   25238 retry.go:31] will retry after 2.44749237s: waiting for machine to come up
	I0910 17:51:44.751762   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:44.752142   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:44.752166   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:44.752104   25238 retry.go:31] will retry after 3.502593835s: waiting for machine to come up
	I0910 17:51:48.255721   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:48.256171   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:48.256197   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:48.256133   25238 retry.go:31] will retry after 3.604327927s: waiting for machine to come up
	I0910 17:51:51.864806   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:51.865324   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:51.865344   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:51.865291   25238 retry.go:31] will retry after 4.848421616s: waiting for machine to come up
	I0910 17:51:56.718037   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:56.718456   24502 main.go:141] libmachine: (ha-558946-m03) Found IP for machine: 192.168.39.241
	I0910 17:51:56.718475   24502 main.go:141] libmachine: (ha-558946-m03) Reserving static IP address...
	I0910 17:51:56.718485   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has current primary IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:56.718869   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find host DHCP lease matching {name: "ha-558946-m03", mac: "52:54:00:fd:d7:43", ip: "192.168.39.241"} in network mk-ha-558946
	I0910 17:51:56.788379   24502 main.go:141] libmachine: (ha-558946-m03) Reserved static IP address: 192.168.39.241
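The repeated "will retry after ..." lines show the driver polling the network's DHCP leases with a growing delay until the domain obtains 192.168.39.241. A generic sketch of that retry-with-backoff pattern, with a hypothetical lookupIP helper standing in for the lease query, is:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the DHCP leases of the
// libvirt network for the domain's MAC address.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP retries lookupIP with a growing, slightly jittered delay,
// mirroring the "will retry after ..." lines in the log.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the interval between attempts
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	ip, err := waitForIP("52:54:00:fd:d7:43", 2*time.Minute)
	fmt.Println(ip, err)
}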
	I0910 17:51:56.788403   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Getting to WaitForSSH function...
	I0910 17:51:56.788411   24502 main.go:141] libmachine: (ha-558946-m03) Waiting for SSH to be available...
	I0910 17:51:56.790972   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:56.791496   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:56.791532   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:56.791559   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Using SSH client type: external
	I0910 17:51:56.791591   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa (-rw-------)
	I0910 17:51:56.791642   24502 main.go:141] libmachine: (ha-558946-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 17:51:56.791663   24502 main.go:141] libmachine: (ha-558946-m03) DBG | About to run SSH command:
	I0910 17:51:56.791687   24502 main.go:141] libmachine: (ha-558946-m03) DBG | exit 0
	I0910 17:51:56.921130   24502 main.go:141] libmachine: (ha-558946-m03) DBG | SSH cmd err, output: <nil>: 
	I0910 17:51:56.921382   24502 main.go:141] libmachine: (ha-558946-m03) KVM machine creation complete!
	I0910 17:51:56.921750   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetConfigRaw
	I0910 17:51:56.922281   24502 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:51:56.922458   24502 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:51:56.922628   24502 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0910 17:51:56.922649   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetState
	I0910 17:51:56.923876   24502 main.go:141] libmachine: Detecting operating system of created instance...
	I0910 17:51:56.923893   24502 main.go:141] libmachine: Waiting for SSH to be available...
	I0910 17:51:56.923902   24502 main.go:141] libmachine: Getting to WaitForSSH function...
	I0910 17:51:56.923908   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:56.926213   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:56.926562   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:56.926584   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:56.926721   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:56.926869   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:56.927000   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:56.927111   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:56.927251   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:51:56.927437   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0910 17:51:56.927447   24502 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0910 17:51:57.040456   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
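Both SSH probes above (the external ssh invocation during WaitForSSH and the native client here) boil down to running "exit 0" on the guest until it succeeds. A minimal reachability check with golang.org/x/crypto/ssh, with placeholder address, user, and key path, might look like this sketch:

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshReady returns nil once "exit 0" runs successfully on the guest,
// which is the same signal the log above is waiting for.
func sshReady(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	// Placeholder values; the log uses 192.168.39.241 and the generated id_rsa.
	if err := sshReady("192.168.39.241:22", "docker", "id_rsa"); err != nil {
		log.Fatal(err)
	}
	log.Println("ssh is available")
}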
	I0910 17:51:57.040479   24502 main.go:141] libmachine: Detecting the provisioner...
	I0910 17:51:57.040486   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:57.042980   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.043358   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:57.043385   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.043538   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:57.043731   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:57.043885   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:57.044034   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:57.044200   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:51:57.044384   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0910 17:51:57.044402   24502 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0910 17:51:57.161681   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0910 17:51:57.161753   24502 main.go:141] libmachine: found compatible host: buildroot
	I0910 17:51:57.161766   24502 main.go:141] libmachine: Provisioning with buildroot...
	I0910 17:51:57.161779   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetMachineName
	I0910 17:51:57.161989   24502 buildroot.go:166] provisioning hostname "ha-558946-m03"
	I0910 17:51:57.162010   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetMachineName
	I0910 17:51:57.162197   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:57.164708   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.165128   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:57.165150   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.165316   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:57.165500   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:57.165653   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:57.165781   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:57.165915   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:51:57.166103   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0910 17:51:57.166116   24502 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-558946-m03 && echo "ha-558946-m03" | sudo tee /etc/hostname
	I0910 17:51:57.292370   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-558946-m03
	
	I0910 17:51:57.292397   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:57.294960   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.295384   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:57.295430   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.295562   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:57.295768   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:57.295939   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:57.296076   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:57.296241   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:51:57.296454   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0910 17:51:57.296481   24502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-558946-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-558946-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-558946-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 17:51:57.422005   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:51:57.422046   24502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 17:51:57.422068   24502 buildroot.go:174] setting up certificates
	I0910 17:51:57.422079   24502 provision.go:84] configureAuth start
	I0910 17:51:57.422089   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetMachineName
	I0910 17:51:57.422379   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetIP
	I0910 17:51:57.424806   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.425146   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:57.425171   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.425367   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:57.427893   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.428271   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:57.428306   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.428471   24502 provision.go:143] copyHostCerts
	I0910 17:51:57.428495   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 17:51:57.428528   24502 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 17:51:57.428541   24502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 17:51:57.428611   24502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 17:51:57.428707   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 17:51:57.428732   24502 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 17:51:57.428741   24502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 17:51:57.428777   24502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 17:51:57.428861   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 17:51:57.428885   24502 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 17:51:57.428895   24502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 17:51:57.428931   24502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 17:51:57.429001   24502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.ha-558946-m03 san=[127.0.0.1 192.168.39.241 ha-558946-m03 localhost minikube]
	I0910 17:51:57.596497   24502 provision.go:177] copyRemoteCerts
	I0910 17:51:57.596547   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 17:51:57.596566   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:57.599135   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.599560   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:57.599583   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.599719   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:57.599894   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:57.600029   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:57.600170   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa Username:docker}
	I0910 17:51:57.687797   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0910 17:51:57.687871   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 17:51:57.712766   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0910 17:51:57.712822   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 17:51:57.737408   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0910 17:51:57.737466   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0910 17:51:57.760783   24502 provision.go:87] duration metric: took 338.691491ms to configureAuth
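configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the node IP 192.168.39.241, ha-558946-m03, localhost, and minikube, then copies ca.pem, server.pem, and server-key.pem to /etc/docker on the guest. As a self-contained sketch of issuing such a SAN-bearing server certificate with Go's crypto/x509 (a throwaway CA is generated inline here instead of loading minikube's ca.pem/ca-key.pem, and error handling is elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; in practice the existing CA key and cert would be loaded.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the DNS and IP SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-558946-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"ha-558946-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.241")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}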
	I0910 17:51:57.760805   24502 buildroot.go:189] setting minikube options for container-runtime
	I0910 17:51:57.760984   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:51:57.761060   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:57.763601   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.763970   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:57.763992   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.764142   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:57.764346   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:57.764486   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:57.764602   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:57.764713   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:51:57.764878   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0910 17:51:57.764898   24502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 17:51:57.999720   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 17:51:57.999745   24502 main.go:141] libmachine: Checking connection to Docker...
	I0910 17:51:57.999762   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetURL
	I0910 17:51:58.000979   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Using libvirt version 6000000
	I0910 17:51:58.003464   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.003888   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:58.003928   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.004119   24502 main.go:141] libmachine: Docker is up and running!
	I0910 17:51:58.004137   24502 main.go:141] libmachine: Reticulating splines...
	I0910 17:51:58.004145   24502 client.go:171] duration metric: took 25.410481159s to LocalClient.Create
	I0910 17:51:58.004172   24502 start.go:167] duration metric: took 25.410545503s to libmachine.API.Create "ha-558946"
	I0910 17:51:58.004190   24502 start.go:293] postStartSetup for "ha-558946-m03" (driver="kvm2")
	I0910 17:51:58.004211   24502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 17:51:58.004234   24502 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:51:58.004511   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 17:51:58.004537   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:58.006411   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.006739   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:58.006768   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.006862   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:58.007035   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:58.007160   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:58.007301   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa Username:docker}
	I0910 17:51:58.095627   24502 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 17:51:58.099727   24502 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 17:51:58.099754   24502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 17:51:58.099810   24502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 17:51:58.099881   24502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 17:51:58.099891   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /etc/ssl/certs/131212.pem
	I0910 17:51:58.099966   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 17:51:58.109618   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 17:51:58.133163   24502 start.go:296] duration metric: took 128.957838ms for postStartSetup
	I0910 17:51:58.133210   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetConfigRaw
	I0910 17:51:58.133785   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetIP
	I0910 17:51:58.136248   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.136611   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:58.136641   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.136851   24502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:51:58.137157   24502 start.go:128] duration metric: took 25.562199754s to createHost
	I0910 17:51:58.137186   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:58.139516   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.139865   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:58.139899   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.140140   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:58.140342   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:58.140525   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:58.140683   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:58.140842   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:51:58.140998   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0910 17:51:58.141008   24502 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 17:51:58.253804   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725990718.231347140
	
	I0910 17:51:58.253823   24502 fix.go:216] guest clock: 1725990718.231347140
	I0910 17:51:58.253834   24502 fix.go:229] Guest: 2024-09-10 17:51:58.23134714 +0000 UTC Remote: 2024-09-10 17:51:58.137174583 +0000 UTC m=+139.083003788 (delta=94.172557ms)
	I0910 17:51:58.253858   24502 fix.go:200] guest clock delta is within tolerance: 94.172557ms
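The clock check above runs date +%s.%N on the guest, compares it to the host's wall clock, and accepts the machine if the skew stays within tolerance (94ms here). A minimal sketch of that comparison, with the tolerance value assumed:

package main

import (
	"fmt"
	"math"
	"time"
)

// clockDeltaOK mirrors the check in the log: compare the guest's reported
// time against the host clock and accept it if the skew is within tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	host := time.Now()
	guest := host.Add(94 * time.Millisecond) // skew similar to the logged delta
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}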
	I0910 17:51:58.253864   24502 start.go:83] releasing machines lock for "ha-558946-m03", held for 25.679053483s
	I0910 17:51:58.253889   24502 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:51:58.254123   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetIP
	I0910 17:51:58.256697   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.257037   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:58.257062   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.259134   24502 out.go:177] * Found network options:
	I0910 17:51:58.260296   24502 out.go:177]   - NO_PROXY=192.168.39.109,192.168.39.96
	W0910 17:51:58.261397   24502 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 17:51:58.261420   24502 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 17:51:58.261431   24502 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:51:58.261924   24502 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:51:58.262083   24502 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:51:58.262168   24502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 17:51:58.262195   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	W0910 17:51:58.262238   24502 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 17:51:58.262266   24502 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 17:51:58.262311   24502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 17:51:58.262324   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:58.264520   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.264679   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.264904   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:58.264930   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.265007   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:58.265027   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:58.265042   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.265219   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:58.265248   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:58.265370   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:58.265415   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:58.265517   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:58.265603   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa Username:docker}
	I0910 17:51:58.265673   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa Username:docker}
	I0910 17:51:58.503190   24502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 17:51:58.509913   24502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 17:51:58.509959   24502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 17:51:58.525557   24502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 17:51:58.525575   24502 start.go:495] detecting cgroup driver to use...
	I0910 17:51:58.525631   24502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 17:51:58.542691   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 17:51:58.555829   24502 docker.go:217] disabling cri-docker service (if available) ...
	I0910 17:51:58.555882   24502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 17:51:58.570042   24502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 17:51:58.583566   24502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 17:51:58.703410   24502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 17:51:58.867516   24502 docker.go:233] disabling docker service ...
	I0910 17:51:58.867584   24502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 17:51:58.882603   24502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 17:51:58.895218   24502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 17:51:59.015401   24502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 17:51:59.134538   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 17:51:59.149032   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 17:51:59.172805   24502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 17:51:59.172856   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:51:59.183466   24502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 17:51:59.183520   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:51:59.194762   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:51:59.205912   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:51:59.216137   24502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 17:51:59.226500   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:51:59.236758   24502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:51:59.255980   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:51:59.266336   24502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 17:51:59.275439   24502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 17:51:59.275486   24502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 17:51:59.287960   24502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 17:51:59.297846   24502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:51:59.420894   24502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 17:51:59.510574   24502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 17:51:59.510644   24502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 17:51:59.515408   24502 start.go:563] Will wait 60s for crictl version
	I0910 17:51:59.515462   24502 ssh_runner.go:195] Run: which crictl
	I0910 17:51:59.519396   24502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 17:51:59.558427   24502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 17:51:59.558497   24502 ssh_runner.go:195] Run: crio --version
	I0910 17:51:59.586281   24502 ssh_runner.go:195] Run: crio --version
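The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image pinned to registry.k8s.io/pause:3.10, cgroup_manager set to cgroupfs, conmon_cgroup set to pod, net.ipv4.ip_unprivileged_port_start=0 added to default_sysctls), restarts crio, and then verifies the runtime through crictl and crio --version. A rough Go equivalent of the two main sed edits, shown only as a sketch (minikube itself runs sed over SSH rather than editing the file this way), is:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies the same two edits the log performs with sed:
// pin the pause image and switch the cgroup manager to cgroupfs.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Path taken from the log; on a real guest this requires root.
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}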
	I0910 17:51:59.615120   24502 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 17:51:59.616369   24502 out.go:177]   - env NO_PROXY=192.168.39.109
	I0910 17:51:59.617492   24502 out.go:177]   - env NO_PROXY=192.168.39.109,192.168.39.96
	I0910 17:51:59.618574   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetIP
	I0910 17:51:59.621409   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:59.621788   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:59.621810   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:59.622001   24502 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 17:51:59.626206   24502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:51:59.638300   24502 mustload.go:65] Loading cluster: ha-558946
	I0910 17:51:59.638504   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:51:59.638790   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:51:59.638832   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:51:59.653007   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34617
	I0910 17:51:59.653354   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:51:59.653761   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:51:59.653780   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:51:59.654069   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:51:59.654228   24502 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:51:59.655581   24502 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:51:59.655844   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:51:59.655877   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:51:59.670783   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35023
	I0910 17:51:59.671114   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:51:59.671551   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:51:59.671573   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:51:59.671839   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:51:59.671995   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:51:59.672143   24502 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946 for IP: 192.168.39.241
	I0910 17:51:59.672156   24502 certs.go:194] generating shared ca certs ...
	I0910 17:51:59.672172   24502 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:51:59.672306   24502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 17:51:59.672362   24502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 17:51:59.672374   24502 certs.go:256] generating profile certs ...
	I0910 17:51:59.672472   24502 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.key
	I0910 17:51:59.672502   24502 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.f8e1ed16
	I0910 17:51:59.672523   24502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.f8e1ed16 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.109 192.168.39.96 192.168.39.241 192.168.39.254]
	I0910 17:51:59.891804   24502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.f8e1ed16 ...
	I0910 17:51:59.891836   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.f8e1ed16: {Name:mkb1c81fb5736388426a997b999622f9986ab5c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:51:59.892015   24502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.f8e1ed16 ...
	I0910 17:51:59.892028   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.f8e1ed16: {Name:mk0687690f8f2aa206b5e80a94279c0dd61cb82a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:51:59.892109   24502 certs.go:381] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.f8e1ed16 -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt
	I0910 17:51:59.892264   24502 certs.go:385] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.f8e1ed16 -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key
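The apiserver certificate generated above carries IP SANs for the service IP 10.96.0.1, localhost, the three node IPs and the kube-vip VIP 192.168.39.254, so every control-plane endpoint presents a valid identity. A minimal sketch of issuing such a certificate with Go's crypto/x509 follows; the self-signing and key size are illustrative, since minikube actually signs with its existing minikubeCA key pair.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative throwaway key; minikube signs with the existing minikubeCA key instead.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log: service IP, localhost, node IPs, kube-vip VIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.109"), net.ParseIP("192.168.39.96"),
			net.ParseIP("192.168.39.241"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}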
	I0910 17:51:59.892398   24502 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key
	I0910 17:51:59.892416   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 17:51:59.892431   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0910 17:51:59.892446   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 17:51:59.892463   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 17:51:59.892478   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 17:51:59.892493   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 17:51:59.892510   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 17:51:59.892524   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 17:51:59.892580   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 17:51:59.892610   24502 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 17:51:59.892620   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 17:51:59.892645   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 17:51:59.892669   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 17:51:59.892694   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 17:51:59.892737   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 17:51:59.892766   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:51:59.892782   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem -> /usr/share/ca-certificates/13121.pem
	I0910 17:51:59.892797   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /usr/share/ca-certificates/131212.pem
	I0910 17:51:59.892831   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:51:59.895532   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:51:59.895879   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:51:59.895910   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:51:59.896038   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:51:59.896247   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:51:59.896385   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:51:59.896537   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:51:59.973391   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0910 17:51:59.979528   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0910 17:51:59.990634   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0910 17:51:59.994678   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0910 17:52:00.005371   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0910 17:52:00.009448   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0910 17:52:00.019328   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0910 17:52:00.023713   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0910 17:52:00.036607   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0910 17:52:00.040895   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0910 17:52:00.050615   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0910 17:52:00.054497   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0910 17:52:00.065961   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 17:52:00.090823   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 17:52:00.113838   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 17:52:00.136178   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 17:52:00.160954   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0910 17:52:00.184003   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 17:52:00.208085   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 17:52:00.232608   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 17:52:00.256893   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 17:52:00.281461   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 17:52:00.317710   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 17:52:00.343444   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0910 17:52:00.361375   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0910 17:52:00.378824   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0910 17:52:00.396074   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0910 17:52:00.413679   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0910 17:52:00.430615   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0910 17:52:00.446442   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0910 17:52:00.461538   24502 ssh_runner.go:195] Run: openssl version
	I0910 17:52:00.466958   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 17:52:00.476968   24502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 17:52:00.481150   24502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 17:52:00.481199   24502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 17:52:00.486724   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 17:52:00.496907   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 17:52:00.508954   24502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 17:52:00.513405   24502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 17:52:00.513447   24502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 17:52:00.518808   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 17:52:00.529379   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 17:52:00.539926   24502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:52:00.544356   24502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:52:00.544397   24502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:52:00.550049   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
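Each CA bundle copied above is made trusted by linking it under its OpenSSL subject hash in /etc/ssl/certs, which is exactly what the test -L || ln -fs commands do. A sketch of that step for a single certificate, shelling out to openssl for the hash (paths are the ones from the log; running it for real needs root on the guest):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	// ln -fs equivalent: replace any existing link, then point it at the certificate.
	_ = os.Remove(link)
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", certPath)
}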
	I0910 17:52:00.560627   24502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 17:52:00.564698   24502 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 17:52:00.564740   24502 kubeadm.go:934] updating node {m03 192.168.39.241 8443 v1.31.0 crio true true} ...
	I0910 17:52:00.564830   24502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-558946-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 17:52:00.564865   24502 kube-vip.go:115] generating kube-vip config ...
	I0910 17:52:00.564894   24502 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0910 17:52:00.581257   24502 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0910 17:52:00.581314   24502 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0910 17:52:00.581377   24502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 17:52:00.591599   24502 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0910 17:52:00.591645   24502 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0910 17:52:00.601750   24502 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0910 17:52:00.601767   24502 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0910 17:52:00.601773   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 17:52:00.601784   24502 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0910 17:52:00.601800   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:52:00.601835   24502 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 17:52:00.601803   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 17:52:00.601913   24502 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 17:52:00.607662   24502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0910 17:52:00.607686   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0910 17:52:00.644344   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 17:52:00.644440   24502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0910 17:52:00.644476   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0910 17:52:00.644450   24502 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 17:52:00.682263   24502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0910 17:52:00.682297   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
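The kubeadm, kubectl and kubelet binaries are fetched with a "?checksum=file:...sha256" URL, meaning each download is verified against the published SHA-256 digest before being copied into /var/lib/minikube/binaries. A sketch of that verify-then-use pattern; the URL is the one logged, everything else is illustrative.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	fmt.Println("kubectl verified,", len(bin), "bytes")
}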
	I0910 17:52:01.415846   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0910 17:52:01.425508   24502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0910 17:52:01.442679   24502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 17:52:01.459730   24502 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0910 17:52:01.476421   24502 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0910 17:52:01.480443   24502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:52:01.492640   24502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:52:01.614523   24502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:52:01.631884   24502 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:52:01.632293   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:52:01.632344   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:52:01.647603   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I0910 17:52:01.648052   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:52:01.648511   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:52:01.648531   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:52:01.648890   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:52:01.649067   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:52:01.649248   24502 start.go:317] joinCluster: &{Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:52:01.649397   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0910 17:52:01.649422   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:52:01.652329   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:52:01.652829   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:52:01.652865   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:52:01.652992   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:52:01.653161   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:52:01.653298   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:52:01.653431   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:52:01.808761   24502 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:52:01.808812   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token up6txw.daqk8dai2qrj9189 --discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-558946-m03 --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443"
	I0910 17:52:24.318347   24502 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token up6txw.daqk8dai2qrj9189 --discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-558946-m03 --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443": (22.509504377s)
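The join command's --discovery-token-ca-cert-hash is not a hash of the whole ca.crt file: it is SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch of recomputing it, assuming ca.crt is readable at the path used above (in practice kubeadm prints this value via "token create --print-join-command", as in the log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the Subject Public Key Info of the CA, not the certificate bytes.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}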
	I0910 17:52:24.318394   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0910 17:52:25.036729   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-558946-m03 minikube.k8s.io/updated_at=2024_09_10T17_52_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=ha-558946 minikube.k8s.io/primary=false
	I0910 17:52:25.173195   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-558946-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0910 17:52:25.320357   24502 start.go:319] duration metric: took 23.67110398s to joinCluster
	I0910 17:52:25.320462   24502 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:52:25.320813   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:52:25.321562   24502 out.go:177] * Verifying Kubernetes components...
	I0910 17:52:25.322687   24502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:52:25.604662   24502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:52:25.677954   24502 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:52:25.678279   24502 kapi.go:59] client config for ha-558946: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.crt", KeyFile:"/home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.key", CAFile:"/home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f2c360), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0910 17:52:25.678372   24502 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.109:8443
	I0910 17:52:25.678726   24502 node_ready.go:35] waiting up to 6m0s for node "ha-558946-m03" to be "Ready" ...
	I0910 17:52:25.678834   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:25.678846   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:25.678859   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:25.678866   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:25.683264   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:52:26.179499   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:26.179516   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:26.179523   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:26.179526   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:26.183497   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:26.679503   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:26.679530   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:26.679542   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:26.679547   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:26.683176   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:27.179605   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:27.179630   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:27.179642   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:27.179646   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:27.182902   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:27.679414   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:27.679449   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:27.679460   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:27.679465   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:27.682139   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:27.682838   24502 node_ready.go:53] node "ha-558946-m03" has status "Ready":"False"
	I0910 17:52:28.179114   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:28.179150   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:28.179159   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:28.179163   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:28.182339   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:28.679119   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:28.679141   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:28.679150   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:28.679154   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:28.683394   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:52:29.179684   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:29.179710   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:29.179721   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:29.179726   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:29.183442   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:29.679039   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:29.679059   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:29.679069   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:29.679075   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:29.681958   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:30.179860   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:30.179882   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:30.179891   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:30.179896   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:30.183936   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:52:30.184669   24502 node_ready.go:53] node "ha-558946-m03" has status "Ready":"False"
	I0910 17:52:30.678973   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:30.678995   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:30.679004   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:30.679008   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:30.681651   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:31.179618   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:31.179637   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:31.179645   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:31.179649   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:31.182874   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:31.679712   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:31.679735   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:31.679743   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:31.679747   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:31.682917   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:32.179064   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:32.179083   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:32.179091   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:32.179094   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:32.181772   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:32.679179   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:32.679205   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:32.679216   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:32.679220   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:32.682216   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:32.682910   24502 node_ready.go:53] node "ha-558946-m03" has status "Ready":"False"
	I0910 17:52:33.179832   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:33.179853   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:33.179864   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:33.179870   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:33.183368   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:33.679165   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:33.679186   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:33.679196   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:33.679200   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:33.682365   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:34.179457   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:34.179478   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:34.179486   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:34.179490   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:34.183209   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:34.679139   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:34.679158   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:34.679172   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:34.679180   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:34.682351   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:35.178948   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:35.178982   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:35.178991   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:35.178996   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:35.182175   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:35.182899   24502 node_ready.go:53] node "ha-558946-m03" has status "Ready":"False"
	I0910 17:52:35.679534   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:35.679557   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:35.679568   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:35.679577   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:35.682491   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:36.179774   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:36.179805   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:36.179819   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:36.179825   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:36.183027   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:36.679808   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:36.679830   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:36.679837   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:36.679841   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:36.682433   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:37.179662   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:37.179681   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:37.179690   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:37.179694   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:37.183057   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:37.183575   24502 node_ready.go:53] node "ha-558946-m03" has status "Ready":"False"
	I0910 17:52:37.679434   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:37.679463   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:37.679474   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:37.679482   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:37.683136   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:38.179047   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:38.179074   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:38.179084   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:38.179092   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:38.182641   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:38.679637   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:38.679659   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:38.679668   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:38.679677   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:38.682391   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:39.179642   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:39.179663   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:39.179674   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:39.179681   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:39.182807   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:39.678974   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:39.678994   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:39.679006   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:39.679012   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:39.682452   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:39.683029   24502 node_ready.go:53] node "ha-558946-m03" has status "Ready":"False"
	I0910 17:52:40.179028   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:40.179060   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.179068   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.179072   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.182089   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:40.679084   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:40.679109   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.679121   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.679127   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.682558   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:40.683011   24502 node_ready.go:49] node "ha-558946-m03" has status "Ready":"True"
	I0910 17:52:40.683025   24502 node_ready.go:38] duration metric: took 15.004282888s for node "ha-558946-m03" to be "Ready" ...
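The GET loop above is minikube polling /api/v1/nodes/ha-558946-m03 roughly every 500ms until the node's Ready condition turns True (about 15s here). The same wait expressed with client-go, as a sketch rather than minikube's own node_ready.go; the kubeconfig path and timeouts are illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-558946-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}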
	I0910 17:52:40.683033   24502 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:52:40.683084   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:52:40.683093   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.683100   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.683103   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.688627   24502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 17:52:40.695199   24502 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-5pv7s" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:40.695270   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-5pv7s
	I0910 17:52:40.695278   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.695285   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.695290   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.698284   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:40.698929   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:40.698945   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.698955   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.698959   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.701757   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:40.702238   24502 pod_ready.go:93] pod "coredns-6f6b679f8f-5pv7s" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:40.702262   24502 pod_ready.go:82] duration metric: took 7.044635ms for pod "coredns-6f6b679f8f-5pv7s" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:40.702272   24502 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fmcmc" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:40.702329   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-fmcmc
	I0910 17:52:40.702339   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.702350   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.702357   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.704642   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:40.705371   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:40.705389   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.705398   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.705403   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.708139   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:40.709730   24502 pod_ready.go:93] pod "coredns-6f6b679f8f-fmcmc" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:40.709746   24502 pod_ready.go:82] duration metric: took 7.467139ms for pod "coredns-6f6b679f8f-fmcmc" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:40.709754   24502 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:40.709794   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-ha-558946
	I0910 17:52:40.709802   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.709811   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.709817   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.711887   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:40.712429   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:40.712443   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.712450   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.712455   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.714656   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:40.715226   24502 pod_ready.go:93] pod "etcd-ha-558946" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:40.715243   24502 pod_ready.go:82] duration metric: took 5.48298ms for pod "etcd-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:40.715253   24502 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:40.715309   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-ha-558946-m02
	I0910 17:52:40.715320   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.715329   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.715338   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.718089   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:40.718540   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:52:40.718553   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.718560   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.718563   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.720665   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:40.721039   24502 pod_ready.go:93] pod "etcd-ha-558946-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:40.721052   24502 pod_ready.go:82] duration metric: took 5.792309ms for pod "etcd-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:40.721062   24502 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-558946-m03" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:40.879457   24502 request.go:632] Waited for 158.329186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-ha-558946-m03
	I0910 17:52:40.879530   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-ha-558946-m03
	I0910 17:52:40.879536   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.879544   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.879548   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.883322   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:41.079300   24502 request.go:632] Waited for 195.201107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:41.079364   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:41.079373   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:41.079382   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:41.079390   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:41.082201   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:41.082797   24502 pod_ready.go:93] pod "etcd-ha-558946-m03" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:41.082817   24502 pod_ready.go:82] duration metric: took 361.747825ms for pod "etcd-ha-558946-m03" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:41.082832   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:41.279076   24502 request.go:632] Waited for 196.180193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946
	I0910 17:52:41.279155   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946
	I0910 17:52:41.279160   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:41.279168   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:41.279172   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:41.282454   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:41.479328   24502 request.go:632] Waited for 196.33062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:41.479394   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:41.479401   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:41.479408   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:41.479415   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:41.482038   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:41.482626   24502 pod_ready.go:93] pod "kube-apiserver-ha-558946" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:41.482643   24502 pod_ready.go:82] duration metric: took 399.802605ms for pod "kube-apiserver-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:41.482656   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:41.679268   24502 request.go:632] Waited for 196.544015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946-m02
	I0910 17:52:41.679341   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946-m02
	I0910 17:52:41.679349   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:41.679359   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:41.679364   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:41.682512   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:41.879708   24502 request.go:632] Waited for 196.352723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:52:41.879758   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:52:41.879763   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:41.879769   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:41.879778   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:41.884152   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:52:41.884799   24502 pod_ready.go:93] pod "kube-apiserver-ha-558946-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:41.884816   24502 pod_ready.go:82] duration metric: took 402.153066ms for pod "kube-apiserver-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:41.884826   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-558946-m03" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:42.079984   24502 request.go:632] Waited for 195.073226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946-m03
	I0910 17:52:42.080046   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946-m03
	I0910 17:52:42.080053   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:42.080064   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:42.080074   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:42.083799   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:42.279999   24502 request.go:632] Waited for 195.304421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:42.280051   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:42.280058   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:42.280075   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:42.280097   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:42.283357   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:42.283965   24502 pod_ready.go:93] pod "kube-apiserver-ha-558946-m03" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:42.283988   24502 pod_ready.go:82] duration metric: took 399.149137ms for pod "kube-apiserver-ha-558946-m03" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:42.284004   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:42.480060   24502 request.go:632] Waited for 195.968031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946
	I0910 17:52:42.480174   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946
	I0910 17:52:42.480200   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:42.480214   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:42.480223   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:42.483063   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:42.680049   24502 request.go:632] Waited for 196.316999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:42.680132   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:42.680140   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:42.680149   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:42.680158   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:42.683053   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:42.683683   24502 pod_ready.go:93] pod "kube-controller-manager-ha-558946" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:42.683699   24502 pod_ready.go:82] duration metric: took 399.684285ms for pod "kube-controller-manager-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:42.683708   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:42.879761   24502 request.go:632] Waited for 195.98885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946-m02
	I0910 17:52:42.879824   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946-m02
	I0910 17:52:42.879832   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:42.879843   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:42.879850   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:42.882761   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:43.079873   24502 request.go:632] Waited for 196.353903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:52:43.079928   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:52:43.079933   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:43.079940   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:43.079944   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:43.083556   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:43.084101   24502 pod_ready.go:93] pod "kube-controller-manager-ha-558946-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:43.084123   24502 pod_ready.go:82] duration metric: took 400.407652ms for pod "kube-controller-manager-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:43.084137   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-558946-m03" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:43.279096   24502 request.go:632] Waited for 194.891277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946-m03
	I0910 17:52:43.279156   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946-m03
	I0910 17:52:43.279162   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:43.279172   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:43.279179   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:43.282580   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:43.480049   24502 request.go:632] Waited for 196.363721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:43.480172   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:43.480181   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:43.480201   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:43.480209   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:43.483483   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:43.484019   24502 pod_ready.go:93] pod "kube-controller-manager-ha-558946-m03" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:43.484040   24502 pod_ready.go:82] duration metric: took 399.893727ms for pod "kube-controller-manager-ha-558946-m03" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:43.484054   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8ldlx" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:43.680052   24502 request.go:632] Waited for 195.928284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8ldlx
	I0910 17:52:43.680147   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8ldlx
	I0910 17:52:43.680158   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:43.680169   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:43.680180   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:43.683455   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:43.879703   24502 request.go:632] Waited for 195.367895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:43.879753   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:43.879759   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:43.879769   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:43.879776   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:43.883182   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:43.883787   24502 pod_ready.go:93] pod "kube-proxy-8ldlx" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:43.883808   24502 pod_ready.go:82] duration metric: took 399.744881ms for pod "kube-proxy-8ldlx" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:43.883822   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gjqzx" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:44.079927   24502 request.go:632] Waited for 196.04605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjqzx
	I0910 17:52:44.079986   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjqzx
	I0910 17:52:44.079993   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:44.080006   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:44.080014   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:44.083263   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:44.279541   24502 request.go:632] Waited for 195.588211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:44.279608   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:44.279613   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:44.279621   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:44.279627   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:44.283206   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:44.284080   24502 pod_ready.go:93] pod "kube-proxy-gjqzx" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:44.284100   24502 pod_ready.go:82] duration metric: took 400.270829ms for pod "kube-proxy-gjqzx" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:44.284110   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xggtm" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:44.479085   24502 request.go:632] Waited for 194.915942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xggtm
	I0910 17:52:44.479149   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xggtm
	I0910 17:52:44.479154   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:44.479161   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:44.479168   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:44.483057   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:44.679214   24502 request.go:632] Waited for 195.228306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:52:44.679274   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:52:44.679281   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:44.679290   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:44.679305   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:44.687270   24502 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 17:52:44.688060   24502 pod_ready.go:93] pod "kube-proxy-xggtm" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:44.688076   24502 pod_ready.go:82] duration metric: took 403.961027ms for pod "kube-proxy-xggtm" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:44.688085   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:44.880028   24502 request.go:632] Waited for 191.881814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946
	I0910 17:52:44.880103   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946
	I0910 17:52:44.880109   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:44.880117   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:44.880121   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:44.883793   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:45.080077   24502 request.go:632] Waited for 195.339736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:45.080123   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:45.080127   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:45.080134   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:45.080138   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:45.083879   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:45.084486   24502 pod_ready.go:93] pod "kube-scheduler-ha-558946" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:45.084502   24502 pod_ready.go:82] duration metric: took 396.410407ms for pod "kube-scheduler-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:45.084512   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:45.279567   24502 request.go:632] Waited for 194.994058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946-m02
	I0910 17:52:45.279641   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946-m02
	I0910 17:52:45.279651   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:45.279658   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:45.279665   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:45.282904   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:45.479741   24502 request.go:632] Waited for 196.217693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:52:45.479821   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:52:45.479831   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:45.479842   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:45.479848   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:45.483127   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:45.483766   24502 pod_ready.go:93] pod "kube-scheduler-ha-558946-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:45.483787   24502 pod_ready.go:82] duration metric: took 399.26798ms for pod "kube-scheduler-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:45.483800   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-558946-m03" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:45.679766   24502 request.go:632] Waited for 195.896259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946-m03
	I0910 17:52:45.679837   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946-m03
	I0910 17:52:45.679848   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:45.679859   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:45.679869   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:45.682853   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:45.879886   24502 request.go:632] Waited for 196.363607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:45.879966   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:45.879974   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:45.879982   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:45.879988   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:45.883181   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:45.883825   24502 pod_ready.go:93] pod "kube-scheduler-ha-558946-m03" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:45.883841   24502 pod_ready.go:82] duration metric: took 400.030658ms for pod "kube-scheduler-ha-558946-m03" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:45.883851   24502 pod_ready.go:39] duration metric: took 5.20080914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:52:45.883866   24502 api_server.go:52] waiting for apiserver process to appear ...
	I0910 17:52:45.883921   24502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:52:45.900126   24502 api_server.go:72] duration metric: took 20.579632142s to wait for apiserver process to appear ...
	I0910 17:52:45.900147   24502 api_server.go:88] waiting for apiserver healthz status ...
	I0910 17:52:45.900170   24502 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I0910 17:52:45.904231   24502 api_server.go:279] https://192.168.39.109:8443/healthz returned 200:
	ok
	I0910 17:52:45.904284   24502 round_trippers.go:463] GET https://192.168.39.109:8443/version
	I0910 17:52:45.904289   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:45.904295   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:45.904302   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:45.905085   24502 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0910 17:52:45.905134   24502 api_server.go:141] control plane version: v1.31.0
	I0910 17:52:45.905147   24502 api_server.go:131] duration metric: took 4.993418ms to wait for apiserver health ...
	I0910 17:52:45.905153   24502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 17:52:46.079501   24502 request.go:632] Waited for 174.288817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:52:46.079566   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:52:46.079572   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:46.079581   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:46.079588   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:46.085235   24502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 17:52:46.091584   24502 system_pods.go:59] 24 kube-system pods found
	I0910 17:52:46.091608   24502 system_pods.go:61] "coredns-6f6b679f8f-5pv7s" [e75ceddc-7576-45f6-8b80-2071bc7fbef8] Running
	I0910 17:52:46.091613   24502 system_pods.go:61] "coredns-6f6b679f8f-fmcmc" [0d79d296-3ee7-4b7b-8869-e45465da70ff] Running
	I0910 17:52:46.091617   24502 system_pods.go:61] "etcd-ha-558946" [d99a9237-7866-40f1-95d6-c6488183479e] Running
	I0910 17:52:46.091621   24502 system_pods.go:61] "etcd-ha-558946-m02" [d22427c5-1548-4bd2-b1c1-5a6a4353077a] Running
	I0910 17:52:46.091625   24502 system_pods.go:61] "etcd-ha-558946-m03" [6d01b402-952c-428d-be87-e461cc07de36] Running
	I0910 17:52:46.091629   24502 system_pods.go:61] "kindnet-mshf2" [cec27b40-9e1f-4c27-9d18-422e75dbc252] Running
	I0910 17:52:46.091635   24502 system_pods.go:61] "kindnet-n8n67" [019cf933-bf89-485d-a837-bf8bbedbc0df] Running
	I0910 17:52:46.091639   24502 system_pods.go:61] "kindnet-sfr7m" [31ccb06a-6f76-4a18-894c-707993f766e5] Running
	I0910 17:52:46.091643   24502 system_pods.go:61] "kube-apiserver-ha-558946" [74003dbd-903b-48de-b85f-973654d0d58e] Running
	I0910 17:52:46.091647   24502 system_pods.go:61] "kube-apiserver-ha-558946-m02" [9136cd3a-a68e-4167-808d-61b33978cf45] Running
	I0910 17:52:46.091650   24502 system_pods.go:61] "kube-apiserver-ha-558946-m03" [ee0b10ae-52c5-4bb9-8eb2-b9921279eab7] Running
	I0910 17:52:46.091654   24502 system_pods.go:61] "kube-controller-manager-ha-558946" [82453b26-31b3-4c6e-8e37-26eb141923fc] Running
	I0910 17:52:46.091659   24502 system_pods.go:61] "kube-controller-manager-ha-558946-m02" [d658071a-4335-4933-88c8-4d2cfccb40df] Running
	I0910 17:52:46.091663   24502 system_pods.go:61] "kube-controller-manager-ha-558946-m03" [935f6235-0c9e-4204-b1ca-c75b2e0946b8] Running
	I0910 17:52:46.091668   24502 system_pods.go:61] "kube-proxy-8ldlx" [a5c5acdd-77fe-432b-80a1-34fd11389f6e] Running
	I0910 17:52:46.091671   24502 system_pods.go:61] "kube-proxy-gjqzx" [35a3fe57-a2d6-4134-8205-ce5c8d09b707] Running
	I0910 17:52:46.091675   24502 system_pods.go:61] "kube-proxy-xggtm" [347371e4-83b7-474c-8924-d33c479d736a] Running
	I0910 17:52:46.091678   24502 system_pods.go:61] "kube-scheduler-ha-558946" [e99973ac-5718-4769-99e3-282c3c25b8f8] Running
	I0910 17:52:46.091684   24502 system_pods.go:61] "kube-scheduler-ha-558946-m02" [6c57c232-f86e-417c-b3a6-867b3ed443bf] Running
	I0910 17:52:46.091686   24502 system_pods.go:61] "kube-scheduler-ha-558946-m03" [60a36ce7-25b1-4800-86cc-bab6e5516d91] Running
	I0910 17:52:46.091692   24502 system_pods.go:61] "kube-vip-ha-558946" [810f85ef-6900-456e-877e-095d38286613] Running
	I0910 17:52:46.091695   24502 system_pods.go:61] "kube-vip-ha-558946-m02" [59850a02-4ce3-47dc-a250-f18c0fd9533c] Running
	I0910 17:52:46.091700   24502 system_pods.go:61] "kube-vip-ha-558946-m03" [f77d0e8b-731a-4bcb-b175-08686fe82852] Running
	I0910 17:52:46.091703   24502 system_pods.go:61] "storage-provisioner" [baf5cd7e-5266-4d55-bd6c-459257baa463] Running
	I0910 17:52:46.091709   24502 system_pods.go:74] duration metric: took 186.550993ms to wait for pod list to return data ...
	I0910 17:52:46.091718   24502 default_sa.go:34] waiting for default service account to be created ...
	I0910 17:52:46.279119   24502 request.go:632] Waited for 187.318054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/default/serviceaccounts
	I0910 17:52:46.279187   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/default/serviceaccounts
	I0910 17:52:46.279202   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:46.279215   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:46.279226   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:46.282981   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:46.283105   24502 default_sa.go:45] found service account: "default"
	I0910 17:52:46.283119   24502 default_sa.go:55] duration metric: took 191.39626ms for default service account to be created ...
	I0910 17:52:46.283129   24502 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 17:52:46.479679   24502 request.go:632] Waited for 196.462097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:52:46.479732   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:52:46.479737   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:46.479744   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:46.479748   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:46.487264   24502 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 17:52:46.494671   24502 system_pods.go:86] 24 kube-system pods found
	I0910 17:52:46.494696   24502 system_pods.go:89] "coredns-6f6b679f8f-5pv7s" [e75ceddc-7576-45f6-8b80-2071bc7fbef8] Running
	I0910 17:52:46.494703   24502 system_pods.go:89] "coredns-6f6b679f8f-fmcmc" [0d79d296-3ee7-4b7b-8869-e45465da70ff] Running
	I0910 17:52:46.494707   24502 system_pods.go:89] "etcd-ha-558946" [d99a9237-7866-40f1-95d6-c6488183479e] Running
	I0910 17:52:46.494711   24502 system_pods.go:89] "etcd-ha-558946-m02" [d22427c5-1548-4bd2-b1c1-5a6a4353077a] Running
	I0910 17:52:46.494714   24502 system_pods.go:89] "etcd-ha-558946-m03" [6d01b402-952c-428d-be87-e461cc07de36] Running
	I0910 17:52:46.494718   24502 system_pods.go:89] "kindnet-mshf2" [cec27b40-9e1f-4c27-9d18-422e75dbc252] Running
	I0910 17:52:46.494721   24502 system_pods.go:89] "kindnet-n8n67" [019cf933-bf89-485d-a837-bf8bbedbc0df] Running
	I0910 17:52:46.494725   24502 system_pods.go:89] "kindnet-sfr7m" [31ccb06a-6f76-4a18-894c-707993f766e5] Running
	I0910 17:52:46.494728   24502 system_pods.go:89] "kube-apiserver-ha-558946" [74003dbd-903b-48de-b85f-973654d0d58e] Running
	I0910 17:52:46.494731   24502 system_pods.go:89] "kube-apiserver-ha-558946-m02" [9136cd3a-a68e-4167-808d-61b33978cf45] Running
	I0910 17:52:46.494735   24502 system_pods.go:89] "kube-apiserver-ha-558946-m03" [ee0b10ae-52c5-4bb9-8eb2-b9921279eab7] Running
	I0910 17:52:46.494739   24502 system_pods.go:89] "kube-controller-manager-ha-558946" [82453b26-31b3-4c6e-8e37-26eb141923fc] Running
	I0910 17:52:46.494743   24502 system_pods.go:89] "kube-controller-manager-ha-558946-m02" [d658071a-4335-4933-88c8-4d2cfccb40df] Running
	I0910 17:52:46.494745   24502 system_pods.go:89] "kube-controller-manager-ha-558946-m03" [935f6235-0c9e-4204-b1ca-c75b2e0946b8] Running
	I0910 17:52:46.494748   24502 system_pods.go:89] "kube-proxy-8ldlx" [a5c5acdd-77fe-432b-80a1-34fd11389f6e] Running
	I0910 17:52:46.494751   24502 system_pods.go:89] "kube-proxy-gjqzx" [35a3fe57-a2d6-4134-8205-ce5c8d09b707] Running
	I0910 17:52:46.494755   24502 system_pods.go:89] "kube-proxy-xggtm" [347371e4-83b7-474c-8924-d33c479d736a] Running
	I0910 17:52:46.494761   24502 system_pods.go:89] "kube-scheduler-ha-558946" [e99973ac-5718-4769-99e3-282c3c25b8f8] Running
	I0910 17:52:46.494764   24502 system_pods.go:89] "kube-scheduler-ha-558946-m02" [6c57c232-f86e-417c-b3a6-867b3ed443bf] Running
	I0910 17:52:46.494770   24502 system_pods.go:89] "kube-scheduler-ha-558946-m03" [60a36ce7-25b1-4800-86cc-bab6e5516d91] Running
	I0910 17:52:46.494773   24502 system_pods.go:89] "kube-vip-ha-558946" [810f85ef-6900-456e-877e-095d38286613] Running
	I0910 17:52:46.494776   24502 system_pods.go:89] "kube-vip-ha-558946-m02" [59850a02-4ce3-47dc-a250-f18c0fd9533c] Running
	I0910 17:52:46.494779   24502 system_pods.go:89] "kube-vip-ha-558946-m03" [f77d0e8b-731a-4bcb-b175-08686fe82852] Running
	I0910 17:52:46.494782   24502 system_pods.go:89] "storage-provisioner" [baf5cd7e-5266-4d55-bd6c-459257baa463] Running
	I0910 17:52:46.494790   24502 system_pods.go:126] duration metric: took 211.653589ms to wait for k8s-apps to be running ...
	I0910 17:52:46.494797   24502 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 17:52:46.494836   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:52:46.510455   24502 system_svc.go:56] duration metric: took 15.650736ms WaitForService to wait for kubelet
	I0910 17:52:46.510482   24502 kubeadm.go:582] duration metric: took 21.189989541s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:52:46.510501   24502 node_conditions.go:102] verifying NodePressure condition ...
	I0910 17:52:46.680122   24502 request.go:632] Waited for 169.552712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes
	I0910 17:52:46.680186   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes
	I0910 17:52:46.680194   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:46.680205   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:46.680215   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:46.683113   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:46.684305   24502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 17:52:46.684326   24502 node_conditions.go:123] node cpu capacity is 2
	I0910 17:52:46.684341   24502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 17:52:46.684346   24502 node_conditions.go:123] node cpu capacity is 2
	I0910 17:52:46.684352   24502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 17:52:46.684356   24502 node_conditions.go:123] node cpu capacity is 2
	I0910 17:52:46.684360   24502 node_conditions.go:105] duration metric: took 173.854209ms to run NodePressure ...
	I0910 17:52:46.684369   24502 start.go:241] waiting for startup goroutines ...
	I0910 17:52:46.684390   24502 start.go:255] writing updated cluster config ...
	I0910 17:52:46.684700   24502 ssh_runner.go:195] Run: rm -f paused
	I0910 17:52:46.734959   24502 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 17:52:46.737419   24502 out.go:177] * Done! kubectl is now configured to use "ha-558946" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.280547653Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f35f5f9c0297fd9ab025fe3115dc06cad0defe8c7d8c46b7d3aeebb921e8d37,PodSandboxId:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725990770310688052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557,PodSandboxId:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725990640053624379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8,PodSandboxId:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725990639993301041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17eb3a40b6abab09b8551b0deec9412b8856647710a0560601915c527b3992a4,PodSandboxId:d537c4783b42f41ae617b03e4040fca8199d546278cc85c863053c57d550a62a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1725990639919153851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d,PodSandboxId:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725990628186458960,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c,PodSandboxId:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172599062
5854752018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284b2d71723b7871cbf3305fb262bc61d05babb848d5f60cc805c17f9bd0a04e,PodSandboxId:495ea13704d281433654df37f08debf37fb57d23ea819e8921d38da1638c28b9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172599061600
2583127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2abe11d64857285f0708440a498977,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc,PodSandboxId:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725990614321988798,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa,PodSandboxId:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725990614273270200,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97a13adca4b5dce2fca4dd9c379c35eb00fbc1282dbac89320ee7500e30af5d,PodSandboxId:6db7b892990fcb24ccee38ed9453630501a203d6327636187a42e68db6103419,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725990614301936106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4056c90198fe8b04ee4ae9c0719d9142563fc0c3951c133cd4f874b1f144a509,PodSandboxId:56c5eaaefd9dc6c292acb872e35cd14ae77a0efc534ef81a62500216f906e161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725990614292937014,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebedbae2-17cf-47a2-a165-208271508b9b name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.297366956Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=baaaf38c-bbe4-491a-b4de-3546a424f02c name=/runtime.v1.RuntimeService/Version
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.297457173Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=baaaf38c-bbe4-491a-b4de-3546a424f02c name=/runtime.v1.RuntimeService/Version
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.299269601Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6562764-1233-468e-9d55-25567ee7fce8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.299759165Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990980299735964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6562764-1233-468e-9d55-25567ee7fce8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.301294702Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05fc1f7d-b951-4bf1-95f8-abe51deb654e name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.301359552Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05fc1f7d-b951-4bf1-95f8-abe51deb654e name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.301590608Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f35f5f9c0297fd9ab025fe3115dc06cad0defe8c7d8c46b7d3aeebb921e8d37,PodSandboxId:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725990770310688052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557,PodSandboxId:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725990640053624379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8,PodSandboxId:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725990639993301041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17eb3a40b6abab09b8551b0deec9412b8856647710a0560601915c527b3992a4,PodSandboxId:d537c4783b42f41ae617b03e4040fca8199d546278cc85c863053c57d550a62a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1725990639919153851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d,PodSandboxId:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725990628186458960,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c,PodSandboxId:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172599062
5854752018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284b2d71723b7871cbf3305fb262bc61d05babb848d5f60cc805c17f9bd0a04e,PodSandboxId:495ea13704d281433654df37f08debf37fb57d23ea819e8921d38da1638c28b9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172599061600
2583127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2abe11d64857285f0708440a498977,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc,PodSandboxId:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725990614321988798,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa,PodSandboxId:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725990614273270200,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97a13adca4b5dce2fca4dd9c379c35eb00fbc1282dbac89320ee7500e30af5d,PodSandboxId:6db7b892990fcb24ccee38ed9453630501a203d6327636187a42e68db6103419,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725990614301936106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4056c90198fe8b04ee4ae9c0719d9142563fc0c3951c133cd4f874b1f144a509,PodSandboxId:56c5eaaefd9dc6c292acb872e35cd14ae77a0efc534ef81a62500216f906e161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725990614292937014,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05fc1f7d-b951-4bf1-95f8-abe51deb654e name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.328379450Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87eebee4-4aa1-4fa8-b7a8-5abedb958415 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.328477976Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87eebee4-4aa1-4fa8-b7a8-5abedb958415 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.328766630Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f35f5f9c0297fd9ab025fe3115dc06cad0defe8c7d8c46b7d3aeebb921e8d37,PodSandboxId:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725990770310688052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557,PodSandboxId:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725990640053624379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8,PodSandboxId:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725990639993301041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17eb3a40b6abab09b8551b0deec9412b8856647710a0560601915c527b3992a4,PodSandboxId:d537c4783b42f41ae617b03e4040fca8199d546278cc85c863053c57d550a62a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1725990639919153851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d,PodSandboxId:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725990628186458960,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c,PodSandboxId:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172599062
5854752018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284b2d71723b7871cbf3305fb262bc61d05babb848d5f60cc805c17f9bd0a04e,PodSandboxId:495ea13704d281433654df37f08debf37fb57d23ea819e8921d38da1638c28b9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172599061600
2583127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2abe11d64857285f0708440a498977,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc,PodSandboxId:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725990614321988798,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa,PodSandboxId:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725990614273270200,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97a13adca4b5dce2fca4dd9c379c35eb00fbc1282dbac89320ee7500e30af5d,PodSandboxId:6db7b892990fcb24ccee38ed9453630501a203d6327636187a42e68db6103419,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725990614301936106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4056c90198fe8b04ee4ae9c0719d9142563fc0c3951c133cd4f874b1f144a509,PodSandboxId:56c5eaaefd9dc6c292acb872e35cd14ae77a0efc534ef81a62500216f906e161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725990614292937014,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87eebee4-4aa1-4fa8-b7a8-5abedb958415 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.329641085Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f875ca7b-5050-471c-9f03-dc9fb79110b0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.329737434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f875ca7b-5050-471c-9f03-dc9fb79110b0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.330035156Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f35f5f9c0297fd9ab025fe3115dc06cad0defe8c7d8c46b7d3aeebb921e8d37,PodSandboxId:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725990770310688052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557,PodSandboxId:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725990640053624379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8,PodSandboxId:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725990639993301041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17eb3a40b6abab09b8551b0deec9412b8856647710a0560601915c527b3992a4,PodSandboxId:d537c4783b42f41ae617b03e4040fca8199d546278cc85c863053c57d550a62a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1725990639919153851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d,PodSandboxId:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725990628186458960,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c,PodSandboxId:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172599062
5854752018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284b2d71723b7871cbf3305fb262bc61d05babb848d5f60cc805c17f9bd0a04e,PodSandboxId:495ea13704d281433654df37f08debf37fb57d23ea819e8921d38da1638c28b9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172599061600
2583127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2abe11d64857285f0708440a498977,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc,PodSandboxId:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725990614321988798,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa,PodSandboxId:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725990614273270200,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97a13adca4b5dce2fca4dd9c379c35eb00fbc1282dbac89320ee7500e30af5d,PodSandboxId:6db7b892990fcb24ccee38ed9453630501a203d6327636187a42e68db6103419,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725990614301936106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4056c90198fe8b04ee4ae9c0719d9142563fc0c3951c133cd4f874b1f144a509,PodSandboxId:56c5eaaefd9dc6c292acb872e35cd14ae77a0efc534ef81a62500216f906e161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725990614292937014,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f875ca7b-5050-471c-9f03-dc9fb79110b0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.331561333Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=09fe5b33-3662-447c-895f-6a12e03c2ba1 name=/runtime.v1.ImageService/ListImages
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.332050021Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,RepoTags:[registry.k8s.io/kube-apiserver:v1.31.0],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001],Size_:95233506,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d],Size_:89437512,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:
1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,RepoTags:[registry.k8s.io/kube-scheduler:v1.31.0],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808],Size_:68420936,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,RepoTags:[registry.k8s.io/kube-proxy:v1.31.0],RepoDigests:[registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe],Size_:92728217,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,RepoTags:[registry.k8s.io/pause:3.10],RepoDigests:[registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a regi
stry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a],Size_:742080,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,RepoTags:[registry.k8s.io/etcd:3.5.15-0],RepoDigests:[registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a],Size_:149009664,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870],Size_:61245718,Uid:nil,Username:nonroot,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867
d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,RepoTags:[docker.io/kindest/kindnetd:v20240730-75a5af0c],RepoDigests:[docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3 docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a],Size_:87165492,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,RepoTags:[ghcr.io/kube-vip/kube-vip:v0.8.0],RepoDigests:[ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f ghcr.io/kube-vip/kube-
vip@sha256:7eb725aff32fd4b31484f6e8e44b538f8403ebc8bd4218ea0ec28218682afff1],Size_:49570267,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,RepoTags:[docker.io/kindest/kindnetd:v20240813-c6f155d6],RepoDigests:[docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166],Size_:87190579,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=09fe5b33-3662-447c-895f-6a12e03c2ba1 name=/runtime.
v1.ImageService/ListImages
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.333172806Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=ea6c3fea-0f21-4526-93ec-44c66c7691a3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.333490804Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-2t4ms,Uid:7344679f-13fd-466b-ad26-a77a20b9386a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725990768239865651,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:52:47.624646475Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d537c4783b42f41ae617b03e4040fca8199d546278cc85c863053c57d550a62a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:baf5cd7e-5266-4d55-bd6c-459257baa463,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1725990639756981493,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-10T17:50:39.436412919Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-fmcmc,Uid:0d79d296-3ee7-4b7b-8869-e45465da70ff,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725990639747190194,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d79d296-3ee7-4b7b-8869-e45465da70ff,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:50:39.437934920Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-5pv7s,Uid:e75ceddc-7576-45f6-8b80-2071bc7fbef8,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1725990639736852026,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:50:39.427210674Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&PodSandboxMetadata{Name:kube-proxy-gjqzx,Uid:35a3fe57-a2d6-4134-8205-ce5c8d09b707,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725990625512332240,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-09-10T17:50:25.198679385Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&PodSandboxMetadata{Name:kindnet-n8n67,Uid:019cf933-bf89-485d-a837-bf8bbedbc0df,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725990625496466909,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:50:25.180886140Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:495ea13704d281433654df37f08debf37fb57d23ea819e8921d38da1638c28b9,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-558946,Uid:1b2abe11d64857285f0708440a498977,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1725990614095023226,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2abe11d64857285f0708440a498977,},Annotations:map[string]string{kubernetes.io/config.hash: 1b2abe11d64857285f0708440a498977,kubernetes.io/config.seen: 2024-09-10T17:50:13.412970035Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6db7b892990fcb24ccee38ed9453630501a203d6327636187a42e68db6103419,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-558946,Uid:5a3bcac99226bc257a0bbe4358f2cf25,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725990614093651232,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apis
erver.advertise-address.endpoint: 192.168.39.109:8443,kubernetes.io/config.hash: 5a3bcac99226bc257a0bbe4358f2cf25,kubernetes.io/config.seen: 2024-09-10T17:50:13.412967348Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-558946,Uid:adbd273a78c889b66df701581a530b4b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725990614090281764,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: adbd273a78c889b66df701581a530b4b,kubernetes.io/config.seen: 2024-09-10T17:50:13.412969313Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Met
adata:&PodSandboxMetadata{Name:etcd-ha-558946,Uid:066fe90d6e5504c167c416bab3c626a5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725990614066672529,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.109:2379,kubernetes.io/config.hash: 066fe90d6e5504c167c416bab3c626a5,kubernetes.io/config.seen: 2024-09-10T17:50:13.412964198Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:56c5eaaefd9dc6c292acb872e35cd14ae77a0efc534ef81a62500216f906e161,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-558946,Uid:f4cb243a9afd92bb7fd74751dcfef866,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725990614065842327,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f4cb243a9afd92bb7fd74751dcfef866,kubernetes.io/config.seen: 2024-09-10T17:50:13.412968416Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ea6c3fea-0f21-4526-93ec-44c66c7691a3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.346446457Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78f12863-6de0-42fb-bf10-637d7c65ec5b name=/runtime.v1.RuntimeService/Version
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.346548977Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78f12863-6de0-42fb-bf10-637d7c65ec5b name=/runtime.v1.RuntimeService/Version
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.349525080Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=885a8c9e-cc9b-455c-8ae4-47f06e549338 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.350045885Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990980350020230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=885a8c9e-cc9b-455c-8ae4-47f06e549338 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.352266168Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f91775c2-8582-4b1d-9922-0a67b61c4447 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.352345406Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f91775c2-8582-4b1d-9922-0a67b61c4447 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:56:20 ha-558946 crio[668]: time="2024-09-10 17:56:20.352650480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f35f5f9c0297fd9ab025fe3115dc06cad0defe8c7d8c46b7d3aeebb921e8d37,PodSandboxId:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725990770310688052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557,PodSandboxId:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725990640053624379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8,PodSandboxId:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725990639993301041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17eb3a40b6abab09b8551b0deec9412b8856647710a0560601915c527b3992a4,PodSandboxId:d537c4783b42f41ae617b03e4040fca8199d546278cc85c863053c57d550a62a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1725990639919153851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d,PodSandboxId:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725990628186458960,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c,PodSandboxId:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172599062
5854752018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284b2d71723b7871cbf3305fb262bc61d05babb848d5f60cc805c17f9bd0a04e,PodSandboxId:495ea13704d281433654df37f08debf37fb57d23ea819e8921d38da1638c28b9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172599061600
2583127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2abe11d64857285f0708440a498977,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc,PodSandboxId:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725990614321988798,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa,PodSandboxId:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725990614273270200,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97a13adca4b5dce2fca4dd9c379c35eb00fbc1282dbac89320ee7500e30af5d,PodSandboxId:6db7b892990fcb24ccee38ed9453630501a203d6327636187a42e68db6103419,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725990614301936106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4056c90198fe8b04ee4ae9c0719d9142563fc0c3951c133cd4f874b1f144a509,PodSandboxId:56c5eaaefd9dc6c292acb872e35cd14ae77a0efc534ef81a62500216f906e161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725990614292937014,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f91775c2-8582-4b1d-9922-0a67b61c4447 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7f35f5f9c0297       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   4704ca681891e       busybox-7dff88458-2t4ms
	142a15832796a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   1c4e9776e0278       coredns-6f6b679f8f-5pv7s
	6899c9efcedba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   434931d96929c       coredns-6f6b679f8f-fmcmc
	17eb3a40b6aba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   d537c4783b42f       storage-provisioner
	e119a0b88cc46       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    5 minutes ago       Running             kindnet-cni               0                   70857c92d854f       kindnet-n8n67
	1668374a3d17c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      5 minutes ago       Running             kube-proxy                0                   718077b7bfae6       kube-proxy-gjqzx
	284b2d71723b7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   495ea13704d28       kube-vip-ha-558946
	edfccb881d415       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      6 minutes ago       Running             kube-scheduler            0                   8c5d88f2921ad       kube-scheduler-ha-558946
	a97a13adca4b5       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      6 minutes ago       Running             kube-apiserver            0                   6db7b892990fc       kube-apiserver-ha-558946
	4056c90198fe8       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      6 minutes ago       Running             kube-controller-manager   0                   56c5eaaefd9dc       kube-controller-manager-ha-558946
	5ebc6afb00309       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   ca3c0af433ced       etcd-ha-558946
	
	
	==> coredns [142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557] <==
	[INFO] 10.244.1.2:38446 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121244s
	[INFO] 10.244.1.2:40680 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000102424s
	[INFO] 10.244.1.2:37614 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000113138s
	[INFO] 10.244.1.2:55352 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001904347s
	[INFO] 10.244.0.4:44314 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.011515239s
	[INFO] 10.244.0.4:59105 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214169s
	[INFO] 10.244.2.2:52223 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140107s
	[INFO] 10.244.2.2:51288 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170301s
	[INFO] 10.244.2.2:43443 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001154075s
	[INFO] 10.244.2.2:45133 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069754s
	[INFO] 10.244.2.2:57378 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111864s
	[INFO] 10.244.1.2:55758 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134542s
	[INFO] 10.244.1.2:40786 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001892504s
	[INFO] 10.244.1.2:39596 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093152s
	[INFO] 10.244.1.2:38058 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000155632s
	[INFO] 10.244.0.4:32898 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106177s
	[INFO] 10.244.0.4:54445 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077161s
	[INFO] 10.244.0.4:39012 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000203844s
	[INFO] 10.244.2.2:51010 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117281s
	[INFO] 10.244.2.2:51174 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000141181s
	[INFO] 10.244.2.2:55393 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185738s
	[INFO] 10.244.2.2:37830 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000216713s
	[INFO] 10.244.2.2:45453 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139889s
	[INFO] 10.244.1.2:46063 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168728s
	[INFO] 10.244.1.2:59108 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116561s
	
	
	==> coredns [6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8] <==
	[INFO] 10.244.0.4:59904 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140397s
	[INFO] 10.244.0.4:35340 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122725s
	[INFO] 10.244.0.4:49436 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125155s
	[INFO] 10.244.0.4:34813 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146204s
	[INFO] 10.244.2.2:34474 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001792363s
	[INFO] 10.244.2.2:38827 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095844s
	[INFO] 10.244.2.2:52413 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066168s
	[INFO] 10.244.1.2:60142 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001269934s
	[INFO] 10.244.1.2:54320 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135946s
	[INFO] 10.244.1.2:51279 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144222s
	[INFO] 10.244.1.2:40290 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000149426s
	[INFO] 10.244.0.4:53110 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120693s
	[INFO] 10.244.2.2:42194 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000245782s
	[INFO] 10.244.2.2:59001 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012574s
	[INFO] 10.244.1.2:60266 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150897s
	[INFO] 10.244.1.2:57758 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013393s
	[INFO] 10.244.1.2:37225 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099566s
	[INFO] 10.244.1.2:49900 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113789s
	[INFO] 10.244.0.4:37306 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000237921s
	[INFO] 10.244.0.4:36705 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168887s
	[INFO] 10.244.0.4:34074 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013369s
	[INFO] 10.244.0.4:34879 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107543s
	[INFO] 10.244.2.2:60365 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000255288s
	[INFO] 10.244.1.2:49914 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000123225s
	[INFO] 10.244.1.2:59420 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000122155s
	
	
	==> describe nodes <==
	Name:               ha-558946
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-558946
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-558946
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T17_50_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:50:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-558946
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 17:56:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 17:52:53 +0000   Tue, 10 Sep 2024 17:50:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 17:52:53 +0000   Tue, 10 Sep 2024 17:50:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 17:52:53 +0000   Tue, 10 Sep 2024 17:50:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 17:52:53 +0000   Tue, 10 Sep 2024 17:50:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    ha-558946
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6888e6da1bdd45dda1c087615a5c1996
	  System UUID:                6888e6da-1bdd-45dd-a1c0-87615a5c1996
	  Boot ID:                    a2579398-c9ae-48e0-a407-b08542361a94
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2t4ms              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 coredns-6f6b679f8f-5pv7s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m55s
	  kube-system                 coredns-6f6b679f8f-fmcmc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m55s
	  kube-system                 etcd-ha-558946                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m2s
	  kube-system                 kindnet-n8n67                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m55s
	  kube-system                 kube-apiserver-ha-558946             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-controller-manager-ha-558946    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-proxy-gjqzx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-scheduler-ha-558946             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-vip-ha-558946                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m54s                kube-proxy       
	  Normal  NodeAllocatableEnforced  6m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m7s (x8 over 6m7s)  kubelet          Node ha-558946 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m7s (x8 over 6m7s)  kubelet          Node ha-558946 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m7s (x7 over 6m7s)  kubelet          Node ha-558946 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m                   kubelet          Node ha-558946 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m                   kubelet          Node ha-558946 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m                   kubelet          Node ha-558946 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m56s                node-controller  Node ha-558946 event: Registered Node ha-558946 in Controller
	  Normal  NodeReady                5m41s                kubelet          Node ha-558946 status is now: NodeReady
	  Normal  RegisteredNode           5m1s                 node-controller  Node ha-558946 event: Registered Node ha-558946 in Controller
	  Normal  RegisteredNode           3m50s                node-controller  Node ha-558946 event: Registered Node ha-558946 in Controller
	
	
	Name:               ha-558946-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-558946-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-558946
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T17_51_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:51:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-558946-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 17:53:54 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 10 Sep 2024 17:53:13 +0000   Tue, 10 Sep 2024 17:54:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 10 Sep 2024 17:53:13 +0000   Tue, 10 Sep 2024 17:54:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 10 Sep 2024 17:53:13 +0000   Tue, 10 Sep 2024 17:54:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 10 Sep 2024 17:53:13 +0000   Tue, 10 Sep 2024 17:54:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    ha-558946-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 db1a36bf29714274bd4e3db4349b13e5
	  System UUID:                db1a36bf-2971-4274-bd4e-3db4349b13e5
	  Boot ID:                    a1e6458f-d889-45f0-9111-7341b37855d1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xnl8m                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 etcd-ha-558946-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m7s
	  kube-system                 kindnet-sfr7m                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m10s
	  kube-system                 kube-apiserver-ha-558946-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-controller-manager-ha-558946-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-proxy-xggtm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-scheduler-ha-558946-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-vip-ha-558946-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m10s (x8 over 5m10s)  kubelet          Node ha-558946-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m10s (x8 over 5m10s)  kubelet          Node ha-558946-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m10s (x7 over 5m10s)  kubelet          Node ha-558946-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m6s                   node-controller  Node ha-558946-m02 event: Registered Node ha-558946-m02 in Controller
	  Normal  RegisteredNode           5m1s                   node-controller  Node ha-558946-m02 event: Registered Node ha-558946-m02 in Controller
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-558946-m02 event: Registered Node ha-558946-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-558946-m02 status is now: NodeNotReady
	
	
	Name:               ha-558946-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-558946-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-558946
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T17_52_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:52:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-558946-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 17:56:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 17:52:51 +0000   Tue, 10 Sep 2024 17:52:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 17:52:51 +0000   Tue, 10 Sep 2024 17:52:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 17:52:51 +0000   Tue, 10 Sep 2024 17:52:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 17:52:51 +0000   Tue, 10 Sep 2024 17:52:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-558946-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8bf15e91753540d5b2e0f1553e9cfa68
	  System UUID:                8bf15e91-7535-40d5-b2e0-f1553e9cfa68
	  Boot ID:                    1d53ab20-8447-45b7-9abb-9b9612c466dc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-szkr7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 etcd-ha-558946-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m58s
	  kube-system                 kindnet-mshf2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m
	  kube-system                 kube-apiserver-ha-558946-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-controller-manager-ha-558946-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-proxy-8ldlx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-scheduler-ha-558946-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-vip-ha-558946-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 3m54s            kube-proxy       
	  Normal  NodeHasSufficientMemory  4m (x8 over 4m)  kubelet          Node ha-558946-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m (x8 over 4m)  kubelet          Node ha-558946-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m (x7 over 4m)  kubelet          Node ha-558946-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m56s            node-controller  Node ha-558946-m03 event: Registered Node ha-558946-m03 in Controller
	  Normal  RegisteredNode           3m56s            node-controller  Node ha-558946-m03 event: Registered Node ha-558946-m03 in Controller
	  Normal  RegisteredNode           3m50s            node-controller  Node ha-558946-m03 event: Registered Node ha-558946-m03 in Controller
	
	
	Name:               ha-558946-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-558946-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-558946
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T17_53_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:53:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-558946-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 17:56:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 17:53:51 +0000   Tue, 10 Sep 2024 17:53:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 17:53:51 +0000   Tue, 10 Sep 2024 17:53:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 17:53:51 +0000   Tue, 10 Sep 2024 17:53:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 17:53:51 +0000   Tue, 10 Sep 2024 17:53:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-558946-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aded0f54a0334cb59bab04e35bcf99b0
	  System UUID:                aded0f54-a033-4cb5-9bab-04e35bcf99b0
	  Boot ID:                    1351708d-4980-4151-bfae-1b9049afb79c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7kzcw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-mk6xt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 2m54s            kube-proxy       
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m)  kubelet          Node ha-558946-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m)  kubelet          Node ha-558946-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m)  kubelet          Node ha-558946-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m56s            node-controller  Node ha-558946-m04 event: Registered Node ha-558946-m04 in Controller
	  Normal  RegisteredNode           2m56s            node-controller  Node ha-558946-m04 event: Registered Node ha-558946-m04 in Controller
	  Normal  RegisteredNode           2m55s            node-controller  Node ha-558946-m04 event: Registered Node ha-558946-m04 in Controller
	  Normal  NodeReady                2m41s            kubelet          Node ha-558946-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep10 17:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050705] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039929] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.788388] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.469715] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.561365] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep10 17:50] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.058035] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055902] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.190997] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.121180] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.267314] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.918739] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +4.478653] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.062428] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.320707] systemd-fstab-generator[1311]: Ignoring "noauto" option for root device
	[  +0.078655] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.553971] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.155608] kauditd_printk_skb: 38 callbacks suppressed
	[Sep10 17:51] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa] <==
	{"level":"warn","ts":"2024-09-10T17:56:20.641799Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.653705Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.659832Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.660753Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.661753Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.671418Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.680405Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.686197Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.689301Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.693870Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.697355Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.703364Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.709686Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.718783Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.725386Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.729777Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.734928Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.741241Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.747552Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.751141Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.753843Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.757691Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.764199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.769891Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:56:20.787374Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:56:20 up 6 min,  0 users,  load average: 0.52, 0.41, 0.19
	Linux ha-558946 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d] <==
	I0910 17:55:49.332662       1 main.go:322] Node ha-558946-m03 has CIDR [10.244.2.0/24] 
	I0910 17:55:59.337868       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 17:55:59.337998       1 main.go:299] handling current node
	I0910 17:55:59.338025       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 17:55:59.338043       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 17:55:59.338227       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0910 17:55:59.338256       1 main.go:322] Node ha-558946-m03 has CIDR [10.244.2.0/24] 
	I0910 17:55:59.338340       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 17:55:59.338359       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 17:56:09.332332       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 17:56:09.332468       1 main.go:299] handling current node
	I0910 17:56:09.332502       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 17:56:09.332520       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 17:56:09.332670       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0910 17:56:09.332690       1 main.go:322] Node ha-558946-m03 has CIDR [10.244.2.0/24] 
	I0910 17:56:09.332758       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 17:56:09.332776       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 17:56:19.339452       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 17:56:19.339554       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 17:56:19.339726       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 17:56:19.339754       1 main.go:299] handling current node
	I0910 17:56:19.339766       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 17:56:19.339770       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 17:56:19.339827       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0910 17:56:19.339847       1 main.go:322] Node ha-558946-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [a97a13adca4b5dce2fca4dd9c379c35eb00fbc1282dbac89320ee7500e30af5d] <==
	I0910 17:50:20.369262       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0910 17:50:20.392833       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0910 17:50:20.409676       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0910 17:50:25.139036       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0910 17:50:25.200259       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0910 17:52:21.636891       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 8.163µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0910 17:52:21.636925       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0910 17:52:21.638695       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0910 17:52:21.639913       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0910 17:52:21.641267       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.714645ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0910 17:52:51.449514       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57658: use of closed network connection
	E0910 17:52:51.627377       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57686: use of closed network connection
	E0910 17:52:51.816302       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57704: use of closed network connection
	E0910 17:52:52.014303       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57722: use of closed network connection
	E0910 17:52:52.199349       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57752: use of closed network connection
	E0910 17:52:52.392530       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57776: use of closed network connection
	E0910 17:52:52.572461       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57794: use of closed network connection
	E0910 17:52:52.755547       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57814: use of closed network connection
	E0910 17:52:52.934221       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57832: use of closed network connection
	E0910 17:52:53.222422       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57862: use of closed network connection
	E0910 17:52:53.394049       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57884: use of closed network connection
	E0910 17:52:53.576589       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57906: use of closed network connection
	E0910 17:52:53.744810       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57922: use of closed network connection
	E0910 17:52:53.920034       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57942: use of closed network connection
	E0910 17:52:54.085568       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57952: use of closed network connection
	
	
	==> kube-controller-manager [4056c90198fe8b04ee4ae9c0719d9142563fc0c3951c133cd4f874b1f144a509] <==
	I0910 17:53:20.701380       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-558946-m04" podCIDRs=["10.244.3.0/24"]
	I0910 17:53:20.701620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:20.703776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:20.730928       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:20.983932       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:21.403938       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:24.445958       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-558946-m04"
	I0910 17:53:24.469703       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:24.994041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:25.023975       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:25.510283       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:25.584729       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:30.878599       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:39.819592       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-558946-m04"
	I0910 17:53:39.819773       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:39.834972       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:40.009388       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:51.518945       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:54:35.538503       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-558946-m04"
	I0910 17:54:35.542258       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m02"
	I0910 17:54:35.568467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m02"
	I0910 17:54:35.609829       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.148374ms"
	I0910 17:54:35.609955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="63.326µs"
	I0910 17:54:39.569032       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m02"
	I0910 17:54:40.778882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m02"
	
	
	==> kube-proxy [1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 17:50:26.217741       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 17:50:26.246303       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.109"]
	E0910 17:50:26.246439       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 17:50:26.302452       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 17:50:26.302542       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 17:50:26.302583       1 server_linux.go:169] "Using iptables Proxier"
	I0910 17:50:26.305035       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 17:50:26.305345       1 server.go:483] "Version info" version="v1.31.0"
	I0910 17:50:26.305506       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 17:50:26.307212       1 config.go:197] "Starting service config controller"
	I0910 17:50:26.307266       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 17:50:26.307302       1 config.go:104] "Starting endpoint slice config controller"
	I0910 17:50:26.307317       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 17:50:26.308183       1 config.go:326] "Starting node config controller"
	I0910 17:50:26.308271       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 17:50:26.407679       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 17:50:26.407768       1 shared_informer.go:320] Caches are synced for service config
	I0910 17:50:26.409170       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc] <==
	W0910 17:50:18.630667       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0910 17:50:18.630755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 17:50:18.651190       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0910 17:50:18.651333       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:50:18.664238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0910 17:50:18.664306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:50:18.749872       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 17:50:18.749932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:50:18.754538       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0910 17:50:18.754610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:50:18.775133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 17:50:18.775251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0910 17:50:18.783301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 17:50:18.783579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:50:18.783311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0910 17:50:18.783701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0910 17:50:20.762780       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0910 17:53:20.783017       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7kzcw\": pod kindnet-7kzcw is already assigned to node \"ha-558946-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-7kzcw" node="ha-558946-m04"
	E0910 17:53:20.783217       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a925295e-bc22-4154-850e-79962508c7ac(kube-system/kindnet-7kzcw) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-7kzcw"
	E0910 17:53:20.783245       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7kzcw\": pod kindnet-7kzcw is already assigned to node \"ha-558946-m04\"" pod="kube-system/kindnet-7kzcw"
	I0910 17:53:20.783283       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-7kzcw" node="ha-558946-m04"
	E0910 17:53:20.926971       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9xbp8\": pod kindnet-9xbp8 is already assigned to node \"ha-558946-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-9xbp8" node="ha-558946-m04"
	E0910 17:53:20.927165       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d228e8b7-bd1d-442c-bf6a-2240d8c2ac04(kube-system/kindnet-9xbp8) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-9xbp8"
	E0910 17:53:20.927360       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9xbp8\": pod kindnet-9xbp8 is already assigned to node \"ha-558946-m04\"" pod="kube-system/kindnet-9xbp8"
	I0910 17:53:20.927386       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-9xbp8" node="ha-558946-m04"
	
	
	==> kubelet <==
	Sep 10 17:55:10 ha-558946 kubelet[1318]: E0910 17:55:10.404276    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990910400566603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:55:20 ha-558946 kubelet[1318]: E0910 17:55:20.297578    1318 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 17:55:20 ha-558946 kubelet[1318]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 17:55:20 ha-558946 kubelet[1318]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 17:55:20 ha-558946 kubelet[1318]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 17:55:20 ha-558946 kubelet[1318]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 17:55:20 ha-558946 kubelet[1318]: E0910 17:55:20.407319    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990920406365989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:55:20 ha-558946 kubelet[1318]: E0910 17:55:20.407375    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990920406365989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:55:30 ha-558946 kubelet[1318]: E0910 17:55:30.408678    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990930408439603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:55:30 ha-558946 kubelet[1318]: E0910 17:55:30.408782    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990930408439603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:55:40 ha-558946 kubelet[1318]: E0910 17:55:40.410691    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990940410012375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:55:40 ha-558946 kubelet[1318]: E0910 17:55:40.411346    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990940410012375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:55:50 ha-558946 kubelet[1318]: E0910 17:55:50.414043    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990950413461728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:55:50 ha-558946 kubelet[1318]: E0910 17:55:50.414435    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990950413461728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:56:00 ha-558946 kubelet[1318]: E0910 17:56:00.415948    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990960415570032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:56:00 ha-558946 kubelet[1318]: E0910 17:56:00.415984    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990960415570032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:56:10 ha-558946 kubelet[1318]: E0910 17:56:10.417278    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990970416781710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:56:10 ha-558946 kubelet[1318]: E0910 17:56:10.417619    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990970416781710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:56:20 ha-558946 kubelet[1318]: E0910 17:56:20.301862    1318 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 17:56:20 ha-558946 kubelet[1318]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 17:56:20 ha-558946 kubelet[1318]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 17:56:20 ha-558946 kubelet[1318]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 17:56:20 ha-558946 kubelet[1318]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 17:56:20 ha-558946 kubelet[1318]: E0910 17:56:20.419299    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990980418917252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:56:20 ha-558946 kubelet[1318]: E0910 17:56:20.419321    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990980418917252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-558946 -n ha-558946
helpers_test.go:261: (dbg) Run:  kubectl --context ha-558946 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.87s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (50.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr: exit status 3 (3.200569587s)

                                                
                                                
-- stdout --
	ha-558946
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558946-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-558946-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558946-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 17:56:25.289627   29197 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:56:25.289725   29197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:56:25.289734   29197 out.go:358] Setting ErrFile to fd 2...
	I0910 17:56:25.289738   29197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:56:25.289908   29197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:56:25.290064   29197 out.go:352] Setting JSON to false
	I0910 17:56:25.290092   29197 mustload.go:65] Loading cluster: ha-558946
	I0910 17:56:25.290200   29197 notify.go:220] Checking for updates...
	I0910 17:56:25.290545   29197 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:56:25.290564   29197 status.go:255] checking status of ha-558946 ...
	I0910 17:56:25.291025   29197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:25.291094   29197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:25.311275   29197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42723
	I0910 17:56:25.311800   29197 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:25.312318   29197 main.go:141] libmachine: Using API Version  1
	I0910 17:56:25.312346   29197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:25.312703   29197 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:25.312936   29197 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:56:25.314594   29197 status.go:330] ha-558946 host status = "Running" (err=<nil>)
	I0910 17:56:25.314611   29197 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:56:25.314887   29197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:25.314930   29197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:25.329470   29197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38043
	I0910 17:56:25.329793   29197 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:25.330240   29197 main.go:141] libmachine: Using API Version  1
	I0910 17:56:25.330275   29197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:25.330541   29197 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:25.330705   29197 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 17:56:25.333218   29197 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:25.333643   29197 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:56:25.333677   29197 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:25.333809   29197 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:56:25.334136   29197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:25.334179   29197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:25.348580   29197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39733
	I0910 17:56:25.348950   29197 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:25.349358   29197 main.go:141] libmachine: Using API Version  1
	I0910 17:56:25.349377   29197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:25.349653   29197 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:25.349797   29197 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:56:25.349958   29197 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:25.349989   29197 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:56:25.352468   29197 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:25.352840   29197 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:56:25.352867   29197 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:25.352995   29197 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:56:25.353143   29197 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:56:25.353291   29197 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:56:25.353421   29197 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:56:25.432411   29197 ssh_runner.go:195] Run: systemctl --version
	I0910 17:56:25.438291   29197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:25.452889   29197 kubeconfig.go:125] found "ha-558946" server: "https://192.168.39.254:8443"
	I0910 17:56:25.452919   29197 api_server.go:166] Checking apiserver status ...
	I0910 17:56:25.452951   29197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:56:25.470702   29197 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1143/cgroup
	W0910 17:56:25.480575   29197 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1143/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 17:56:25.480609   29197 ssh_runner.go:195] Run: ls
	I0910 17:56:25.485386   29197 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 17:56:25.489421   29197 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 17:56:25.489444   29197 status.go:422] ha-558946 apiserver status = Running (err=<nil>)
	I0910 17:56:25.489456   29197 status.go:257] ha-558946 status: &{Name:ha-558946 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:56:25.489474   29197 status.go:255] checking status of ha-558946-m02 ...
	I0910 17:56:25.489810   29197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:25.489845   29197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:25.504284   29197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43717
	I0910 17:56:25.504676   29197 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:25.505166   29197 main.go:141] libmachine: Using API Version  1
	I0910 17:56:25.505189   29197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:25.505456   29197 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:25.505613   29197 main.go:141] libmachine: (ha-558946-m02) Calling .GetState
	I0910 17:56:25.507090   29197 status.go:330] ha-558946-m02 host status = "Running" (err=<nil>)
	I0910 17:56:25.507103   29197 host.go:66] Checking if "ha-558946-m02" exists ...
	I0910 17:56:25.507404   29197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:25.507450   29197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:25.521690   29197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37429
	I0910 17:56:25.522037   29197 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:25.522448   29197 main.go:141] libmachine: Using API Version  1
	I0910 17:56:25.522465   29197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:25.522804   29197 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:25.522966   29197 main.go:141] libmachine: (ha-558946-m02) Calling .GetIP
	I0910 17:56:25.525786   29197 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:25.526214   29197 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:56:25.526247   29197 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:25.526395   29197 host.go:66] Checking if "ha-558946-m02" exists ...
	I0910 17:56:25.526703   29197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:25.526741   29197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:25.541290   29197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36721
	I0910 17:56:25.541630   29197 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:25.542047   29197 main.go:141] libmachine: Using API Version  1
	I0910 17:56:25.542069   29197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:25.542359   29197 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:25.542519   29197 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:56:25.542670   29197 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:25.542691   29197 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:56:25.545401   29197 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:25.545782   29197 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:56:25.545803   29197 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:25.545943   29197 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:56:25.546085   29197 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:56:25.546231   29197 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:56:25.546367   29197 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa Username:docker}
	W0910 17:56:28.097332   29197 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.96:22: connect: no route to host
	W0910 17:56:28.097472   29197 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	E0910 17:56:28.097499   29197 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	I0910 17:56:28.097510   29197 status.go:257] ha-558946-m02 status: &{Name:ha-558946-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0910 17:56:28.097540   29197 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	I0910 17:56:28.097552   29197 status.go:255] checking status of ha-558946-m03 ...
	I0910 17:56:28.097943   29197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:28.097993   29197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:28.113018   29197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33789
	I0910 17:56:28.113450   29197 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:28.113966   29197 main.go:141] libmachine: Using API Version  1
	I0910 17:56:28.113991   29197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:28.114324   29197 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:28.114500   29197 main.go:141] libmachine: (ha-558946-m03) Calling .GetState
	I0910 17:56:28.116010   29197 status.go:330] ha-558946-m03 host status = "Running" (err=<nil>)
	I0910 17:56:28.116025   29197 host.go:66] Checking if "ha-558946-m03" exists ...
	I0910 17:56:28.116359   29197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:28.116394   29197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:28.130893   29197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43921
	I0910 17:56:28.131278   29197 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:28.131719   29197 main.go:141] libmachine: Using API Version  1
	I0910 17:56:28.131744   29197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:28.132081   29197 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:28.132288   29197 main.go:141] libmachine: (ha-558946-m03) Calling .GetIP
	I0910 17:56:28.135038   29197 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:28.135470   29197 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:56:28.135496   29197 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:28.135669   29197 host.go:66] Checking if "ha-558946-m03" exists ...
	I0910 17:56:28.136016   29197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:28.136055   29197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:28.150062   29197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40355
	I0910 17:56:28.150436   29197 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:28.150832   29197 main.go:141] libmachine: Using API Version  1
	I0910 17:56:28.150850   29197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:28.151123   29197 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:28.151300   29197 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:56:28.151443   29197 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:28.151463   29197 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:56:28.153852   29197 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:28.154209   29197 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:56:28.154235   29197 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:28.154366   29197 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:56:28.154539   29197 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:56:28.154681   29197 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:56:28.154810   29197 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa Username:docker}
	I0910 17:56:28.241230   29197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:28.257454   29197 kubeconfig.go:125] found "ha-558946" server: "https://192.168.39.254:8443"
	I0910 17:56:28.257480   29197 api_server.go:166] Checking apiserver status ...
	I0910 17:56:28.257520   29197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:56:28.272786   29197 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W0910 17:56:28.283273   29197 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 17:56:28.283317   29197 ssh_runner.go:195] Run: ls
	I0910 17:56:28.287509   29197 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 17:56:28.294541   29197 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 17:56:28.294564   29197 status.go:422] ha-558946-m03 apiserver status = Running (err=<nil>)
	I0910 17:56:28.294572   29197 status.go:257] ha-558946-m03 status: &{Name:ha-558946-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:56:28.294596   29197 status.go:255] checking status of ha-558946-m04 ...
	I0910 17:56:28.294916   29197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:28.294954   29197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:28.310213   29197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I0910 17:56:28.310642   29197 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:28.311128   29197 main.go:141] libmachine: Using API Version  1
	I0910 17:56:28.311153   29197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:28.311469   29197 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:28.311673   29197 main.go:141] libmachine: (ha-558946-m04) Calling .GetState
	I0910 17:56:28.313114   29197 status.go:330] ha-558946-m04 host status = "Running" (err=<nil>)
	I0910 17:56:28.313127   29197 host.go:66] Checking if "ha-558946-m04" exists ...
	I0910 17:56:28.313499   29197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:28.313549   29197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:28.328229   29197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46163
	I0910 17:56:28.328569   29197 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:28.329052   29197 main.go:141] libmachine: Using API Version  1
	I0910 17:56:28.329091   29197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:28.329374   29197 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:28.329583   29197 main.go:141] libmachine: (ha-558946-m04) Calling .GetIP
	I0910 17:56:28.332252   29197 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:28.332738   29197 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:53:09 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 17:56:28.332761   29197 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:28.332946   29197 host.go:66] Checking if "ha-558946-m04" exists ...
	I0910 17:56:28.333359   29197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:28.333400   29197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:28.347807   29197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42937
	I0910 17:56:28.348140   29197 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:28.348528   29197 main.go:141] libmachine: Using API Version  1
	I0910 17:56:28.348548   29197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:28.348858   29197 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:28.349033   29197 main.go:141] libmachine: (ha-558946-m04) Calling .DriverName
	I0910 17:56:28.349197   29197 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:28.349213   29197 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHHostname
	I0910 17:56:28.351699   29197 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:28.352024   29197 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:53:09 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 17:56:28.352039   29197 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:28.352171   29197 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHPort
	I0910 17:56:28.352320   29197 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHKeyPath
	I0910 17:56:28.352469   29197 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHUsername
	I0910 17:56:28.352596   29197 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m04/id_rsa Username:docker}
	I0910 17:56:28.436162   29197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:28.450282   29197 status.go:257] ha-558946-m04 status: &{Name:ha-558946-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr: exit status 3 (2.407671155s)

                                                
                                                
-- stdout --
	ha-558946
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558946-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-558946-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558946-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 17:56:29.165545   29296 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:56:29.165677   29296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:56:29.165687   29296 out.go:358] Setting ErrFile to fd 2...
	I0910 17:56:29.165693   29296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:56:29.165859   29296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:56:29.166055   29296 out.go:352] Setting JSON to false
	I0910 17:56:29.166087   29296 mustload.go:65] Loading cluster: ha-558946
	I0910 17:56:29.166203   29296 notify.go:220] Checking for updates...
	I0910 17:56:29.166499   29296 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:56:29.166514   29296 status.go:255] checking status of ha-558946 ...
	I0910 17:56:29.166854   29296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:29.166919   29296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:29.183156   29296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
	I0910 17:56:29.183704   29296 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:29.184371   29296 main.go:141] libmachine: Using API Version  1
	I0910 17:56:29.184395   29296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:29.184895   29296 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:29.185040   29296 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:56:29.187393   29296 status.go:330] ha-558946 host status = "Running" (err=<nil>)
	I0910 17:56:29.187411   29296 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:56:29.187817   29296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:29.187862   29296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:29.203005   29296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34881
	I0910 17:56:29.203439   29296 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:29.203926   29296 main.go:141] libmachine: Using API Version  1
	I0910 17:56:29.203947   29296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:29.204207   29296 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:29.204384   29296 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 17:56:29.206963   29296 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:29.207315   29296 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:56:29.207344   29296 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:29.207466   29296 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:56:29.207735   29296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:29.207771   29296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:29.223499   29296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42785
	I0910 17:56:29.223908   29296 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:29.224319   29296 main.go:141] libmachine: Using API Version  1
	I0910 17:56:29.224341   29296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:29.224643   29296 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:29.224848   29296 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:56:29.225023   29296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:29.225050   29296 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:56:29.227629   29296 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:29.228076   29296 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:56:29.228113   29296 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:29.228244   29296 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:56:29.228427   29296 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:56:29.228544   29296 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:56:29.228653   29296 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:56:29.316327   29296 ssh_runner.go:195] Run: systemctl --version
	I0910 17:56:29.322133   29296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:29.336729   29296 kubeconfig.go:125] found "ha-558946" server: "https://192.168.39.254:8443"
	I0910 17:56:29.336759   29296 api_server.go:166] Checking apiserver status ...
	I0910 17:56:29.336787   29296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:56:29.350886   29296 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1143/cgroup
	W0910 17:56:29.361532   29296 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1143/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 17:56:29.361579   29296 ssh_runner.go:195] Run: ls
	I0910 17:56:29.365615   29296 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 17:56:29.371336   29296 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 17:56:29.371358   29296 status.go:422] ha-558946 apiserver status = Running (err=<nil>)
	I0910 17:56:29.371370   29296 status.go:257] ha-558946 status: &{Name:ha-558946 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:56:29.371396   29296 status.go:255] checking status of ha-558946-m02 ...
	I0910 17:56:29.371795   29296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:29.371841   29296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:29.386528   29296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43613
	I0910 17:56:29.386942   29296 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:29.387351   29296 main.go:141] libmachine: Using API Version  1
	I0910 17:56:29.387371   29296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:29.387655   29296 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:29.387831   29296 main.go:141] libmachine: (ha-558946-m02) Calling .GetState
	I0910 17:56:29.389411   29296 status.go:330] ha-558946-m02 host status = "Running" (err=<nil>)
	I0910 17:56:29.389425   29296 host.go:66] Checking if "ha-558946-m02" exists ...
	I0910 17:56:29.389704   29296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:29.389745   29296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:29.403911   29296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41049
	I0910 17:56:29.404327   29296 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:29.404866   29296 main.go:141] libmachine: Using API Version  1
	I0910 17:56:29.404890   29296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:29.405213   29296 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:29.405405   29296 main.go:141] libmachine: (ha-558946-m02) Calling .GetIP
	I0910 17:56:29.407961   29296 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:29.408360   29296 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:56:29.408383   29296 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:29.408501   29296 host.go:66] Checking if "ha-558946-m02" exists ...
	I0910 17:56:29.408903   29296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:29.408939   29296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:29.423138   29296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43559
	I0910 17:56:29.423443   29296 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:29.423771   29296 main.go:141] libmachine: Using API Version  1
	I0910 17:56:29.423788   29296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:29.424041   29296 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:29.424191   29296 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:56:29.424349   29296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:29.424372   29296 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:56:29.427121   29296 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:29.427529   29296 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:56:29.427564   29296 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:29.427712   29296 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:56:29.427859   29296 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:56:29.427968   29296 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:56:29.428071   29296 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa Username:docker}
	W0910 17:56:31.169335   29296 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.96:22: connect: no route to host
	W0910 17:56:31.169470   29296 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	E0910 17:56:31.169494   29296 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	I0910 17:56:31.169504   29296 status.go:257] ha-558946-m02 status: &{Name:ha-558946-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0910 17:56:31.169525   29296 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	I0910 17:56:31.169545   29296 status.go:255] checking status of ha-558946-m03 ...
	I0910 17:56:31.169913   29296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:31.169954   29296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:31.185819   29296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
	I0910 17:56:31.186189   29296 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:31.186629   29296 main.go:141] libmachine: Using API Version  1
	I0910 17:56:31.186649   29296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:31.186940   29296 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:31.187121   29296 main.go:141] libmachine: (ha-558946-m03) Calling .GetState
	I0910 17:56:31.188576   29296 status.go:330] ha-558946-m03 host status = "Running" (err=<nil>)
	I0910 17:56:31.188594   29296 host.go:66] Checking if "ha-558946-m03" exists ...
	I0910 17:56:31.188925   29296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:31.188987   29296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:31.204089   29296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41859
	I0910 17:56:31.204499   29296 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:31.204967   29296 main.go:141] libmachine: Using API Version  1
	I0910 17:56:31.204993   29296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:31.205326   29296 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:31.205525   29296 main.go:141] libmachine: (ha-558946-m03) Calling .GetIP
	I0910 17:56:31.208436   29296 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:31.208876   29296 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:56:31.208897   29296 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:31.209058   29296 host.go:66] Checking if "ha-558946-m03" exists ...
	I0910 17:56:31.209379   29296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:31.209411   29296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:31.224531   29296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I0910 17:56:31.224952   29296 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:31.225416   29296 main.go:141] libmachine: Using API Version  1
	I0910 17:56:31.225449   29296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:31.225751   29296 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:31.225918   29296 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:56:31.226123   29296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:31.226141   29296 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:56:31.228915   29296 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:31.229333   29296 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:56:31.229356   29296 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:31.229517   29296 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:56:31.229664   29296 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:56:31.229842   29296 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:56:31.229975   29296 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa Username:docker}
	I0910 17:56:31.316577   29296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:31.331196   29296 kubeconfig.go:125] found "ha-558946" server: "https://192.168.39.254:8443"
	I0910 17:56:31.331220   29296 api_server.go:166] Checking apiserver status ...
	I0910 17:56:31.331249   29296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:56:31.346592   29296 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W0910 17:56:31.356430   29296 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 17:56:31.356477   29296 ssh_runner.go:195] Run: ls
	I0910 17:56:31.361335   29296 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 17:56:31.365686   29296 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 17:56:31.365707   29296 status.go:422] ha-558946-m03 apiserver status = Running (err=<nil>)
	I0910 17:56:31.365715   29296 status.go:257] ha-558946-m03 status: &{Name:ha-558946-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:56:31.365749   29296 status.go:255] checking status of ha-558946-m04 ...
	I0910 17:56:31.366097   29296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:31.366143   29296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:31.381621   29296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43897
	I0910 17:56:31.382012   29296 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:31.382507   29296 main.go:141] libmachine: Using API Version  1
	I0910 17:56:31.382527   29296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:31.382877   29296 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:31.383040   29296 main.go:141] libmachine: (ha-558946-m04) Calling .GetState
	I0910 17:56:31.384679   29296 status.go:330] ha-558946-m04 host status = "Running" (err=<nil>)
	I0910 17:56:31.384698   29296 host.go:66] Checking if "ha-558946-m04" exists ...
	I0910 17:56:31.384977   29296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:31.385013   29296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:31.400548   29296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33917
	I0910 17:56:31.400921   29296 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:31.401330   29296 main.go:141] libmachine: Using API Version  1
	I0910 17:56:31.401347   29296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:31.401634   29296 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:31.401843   29296 main.go:141] libmachine: (ha-558946-m04) Calling .GetIP
	I0910 17:56:31.404684   29296 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:31.405145   29296 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:53:09 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 17:56:31.405179   29296 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:31.405330   29296 host.go:66] Checking if "ha-558946-m04" exists ...
	I0910 17:56:31.405624   29296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:31.405655   29296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:31.421420   29296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36271
	I0910 17:56:31.421782   29296 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:31.422257   29296 main.go:141] libmachine: Using API Version  1
	I0910 17:56:31.422279   29296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:31.422583   29296 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:31.422761   29296 main.go:141] libmachine: (ha-558946-m04) Calling .DriverName
	I0910 17:56:31.422929   29296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:31.422946   29296 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHHostname
	I0910 17:56:31.425442   29296 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:31.425964   29296 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:53:09 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 17:56:31.426005   29296 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:31.426115   29296 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHPort
	I0910 17:56:31.426278   29296 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHKeyPath
	I0910 17:56:31.426425   29296 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHUsername
	I0910 17:56:31.426570   29296 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m04/id_rsa Username:docker}
	I0910 17:56:31.512541   29296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:31.527581   29296 status.go:257] ha-558946-m04 status: &{Name:ha-558946-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
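The stderr above shows the status probe dialing ha-558946-m02's SSH port (192.168.39.96:22), getting "connect: no route to host", retrying briefly, and then reporting the node as Host:Error / Kubelet:Nonexistent. A minimal Go sketch of that reachability check is below; it is illustrative only, not minikube's actual code. The address is taken from the log, while the attempt count and retry interval are assumptions chosen for the example.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// sshReachable probes a node's SSH port the way the status check above does,
	// retrying a few times before giving up. Hypothetical helper for illustration.
	func sshReachable(addr string, attempts int, wait time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			var conn net.Conn
			conn, err = net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				conn.Close()
				return nil // port 22 answered; the host is network-reachable
			}
			// "connect: no route to host" surfaces here when the VM is stopped
			time.Sleep(wait)
		}
		return fmt.Errorf("ssh port unreachable after %d attempts: %w", attempts, err)
	}

	func main() {
		if err := sshReachable("192.168.39.96:22", 3, 250*time.Millisecond); err != nil {
			// the status command maps a failure like this to Host:Error / Kubelet:Nonexistent
			fmt.Println(err)
		}
	}

When the dial succeeds, the real probe goes on to run "df -h /var" over the SSH session, which is why the same failure also appears in the log as "failed to get storage capacity of /var".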
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr
E0910 17:56:35.171467   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr: exit status 3 (4.971398824s)

                                                
                                                
-- stdout --
	ha-558946
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558946-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-558946-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558946-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 17:56:32.985836   29396 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:56:32.986066   29396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:56:32.986075   29396 out.go:358] Setting ErrFile to fd 2...
	I0910 17:56:32.986079   29396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:56:32.986254   29396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:56:32.986395   29396 out.go:352] Setting JSON to false
	I0910 17:56:32.986419   29396 mustload.go:65] Loading cluster: ha-558946
	I0910 17:56:32.986515   29396 notify.go:220] Checking for updates...
	I0910 17:56:32.986760   29396 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:56:32.986773   29396 status.go:255] checking status of ha-558946 ...
	I0910 17:56:32.987168   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:32.987207   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:33.006174   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0910 17:56:33.006549   29396 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:33.007056   29396 main.go:141] libmachine: Using API Version  1
	I0910 17:56:33.007077   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:33.007468   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:33.007671   29396 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:56:33.009219   29396 status.go:330] ha-558946 host status = "Running" (err=<nil>)
	I0910 17:56:33.009235   29396 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:56:33.009523   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:33.009556   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:33.023617   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0910 17:56:33.023964   29396 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:33.024338   29396 main.go:141] libmachine: Using API Version  1
	I0910 17:56:33.024359   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:33.024600   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:33.024755   29396 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 17:56:33.027276   29396 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:33.027692   29396 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:56:33.027740   29396 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:33.027857   29396 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:56:33.028163   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:33.028197   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:33.043607   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45325
	I0910 17:56:33.043922   29396 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:33.044336   29396 main.go:141] libmachine: Using API Version  1
	I0910 17:56:33.044361   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:33.044687   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:33.044897   29396 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:56:33.045117   29396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:33.045155   29396 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:56:33.047723   29396 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:33.048167   29396 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:56:33.048193   29396 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:33.048337   29396 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:56:33.048500   29396 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:56:33.048661   29396 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:56:33.048801   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:56:33.129613   29396 ssh_runner.go:195] Run: systemctl --version
	I0910 17:56:33.136321   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:33.152679   29396 kubeconfig.go:125] found "ha-558946" server: "https://192.168.39.254:8443"
	I0910 17:56:33.152706   29396 api_server.go:166] Checking apiserver status ...
	I0910 17:56:33.152740   29396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:56:33.165832   29396 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1143/cgroup
	W0910 17:56:33.175576   29396 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1143/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 17:56:33.175629   29396 ssh_runner.go:195] Run: ls
	I0910 17:56:33.179605   29396 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 17:56:33.183628   29396 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 17:56:33.183646   29396 status.go:422] ha-558946 apiserver status = Running (err=<nil>)
	I0910 17:56:33.183655   29396 status.go:257] ha-558946 status: &{Name:ha-558946 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:56:33.183670   29396 status.go:255] checking status of ha-558946-m02 ...
	I0910 17:56:33.184062   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:33.184097   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:33.198962   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41475
	I0910 17:56:33.199369   29396 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:33.199834   29396 main.go:141] libmachine: Using API Version  1
	I0910 17:56:33.199853   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:33.200177   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:33.200363   29396 main.go:141] libmachine: (ha-558946-m02) Calling .GetState
	I0910 17:56:33.201835   29396 status.go:330] ha-558946-m02 host status = "Running" (err=<nil>)
	I0910 17:56:33.201849   29396 host.go:66] Checking if "ha-558946-m02" exists ...
	I0910 17:56:33.202146   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:33.202189   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:33.216882   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0910 17:56:33.217297   29396 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:33.217775   29396 main.go:141] libmachine: Using API Version  1
	I0910 17:56:33.217796   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:33.218104   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:33.218249   29396 main.go:141] libmachine: (ha-558946-m02) Calling .GetIP
	I0910 17:56:33.220891   29396 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:33.221354   29396 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:56:33.221387   29396 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:33.221510   29396 host.go:66] Checking if "ha-558946-m02" exists ...
	I0910 17:56:33.221786   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:33.221815   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:33.236230   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46323
	I0910 17:56:33.236670   29396 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:33.237109   29396 main.go:141] libmachine: Using API Version  1
	I0910 17:56:33.237136   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:33.237428   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:33.237576   29396 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:56:33.237752   29396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:33.237772   29396 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:56:33.240012   29396 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:33.240443   29396 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:56:33.240473   29396 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:33.240587   29396 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:56:33.240741   29396 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:56:33.240864   29396 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:56:33.240972   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa Username:docker}
	W0910 17:56:34.241279   29396 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.96:22: connect: no route to host
	I0910 17:56:34.241321   29396 retry.go:31] will retry after 252.774472ms: dial tcp 192.168.39.96:22: connect: no route to host
	W0910 17:56:37.569298   29396 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.96:22: connect: no route to host
	W0910 17:56:37.569376   29396 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	E0910 17:56:37.569402   29396 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	I0910 17:56:37.569411   29396 status.go:257] ha-558946-m02 status: &{Name:ha-558946-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0910 17:56:37.569434   29396 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	I0910 17:56:37.569442   29396 status.go:255] checking status of ha-558946-m03 ...
	I0910 17:56:37.569755   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:37.569798   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:37.584418   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33181
	I0910 17:56:37.584937   29396 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:37.585426   29396 main.go:141] libmachine: Using API Version  1
	I0910 17:56:37.585448   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:37.585724   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:37.585892   29396 main.go:141] libmachine: (ha-558946-m03) Calling .GetState
	I0910 17:56:37.587369   29396 status.go:330] ha-558946-m03 host status = "Running" (err=<nil>)
	I0910 17:56:37.587386   29396 host.go:66] Checking if "ha-558946-m03" exists ...
	I0910 17:56:37.587666   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:37.587701   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:37.602394   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I0910 17:56:37.602780   29396 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:37.603165   29396 main.go:141] libmachine: Using API Version  1
	I0910 17:56:37.603182   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:37.603450   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:37.603631   29396 main.go:141] libmachine: (ha-558946-m03) Calling .GetIP
	I0910 17:56:37.605928   29396 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:37.606331   29396 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:56:37.606359   29396 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:37.606479   29396 host.go:66] Checking if "ha-558946-m03" exists ...
	I0910 17:56:37.606790   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:37.606839   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:37.622205   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33451
	I0910 17:56:37.622583   29396 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:37.623026   29396 main.go:141] libmachine: Using API Version  1
	I0910 17:56:37.623042   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:37.623320   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:37.623493   29396 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:56:37.623655   29396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:37.623671   29396 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:56:37.626123   29396 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:37.626533   29396 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:56:37.626568   29396 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:37.626675   29396 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:56:37.626853   29396 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:56:37.627011   29396 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:56:37.627122   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa Username:docker}
	I0910 17:56:37.712613   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:37.728130   29396 kubeconfig.go:125] found "ha-558946" server: "https://192.168.39.254:8443"
	I0910 17:56:37.728157   29396 api_server.go:166] Checking apiserver status ...
	I0910 17:56:37.728185   29396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:56:37.741232   29396 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W0910 17:56:37.750038   29396 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 17:56:37.750079   29396 ssh_runner.go:195] Run: ls
	I0910 17:56:37.754977   29396 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 17:56:37.759547   29396 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 17:56:37.759577   29396 status.go:422] ha-558946-m03 apiserver status = Running (err=<nil>)
	I0910 17:56:37.759589   29396 status.go:257] ha-558946-m03 status: &{Name:ha-558946-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:56:37.759609   29396 status.go:255] checking status of ha-558946-m04 ...
	I0910 17:56:37.759906   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:37.759948   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:37.775204   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41043
	I0910 17:56:37.775563   29396 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:37.776022   29396 main.go:141] libmachine: Using API Version  1
	I0910 17:56:37.776040   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:37.776308   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:37.776497   29396 main.go:141] libmachine: (ha-558946-m04) Calling .GetState
	I0910 17:56:37.778116   29396 status.go:330] ha-558946-m04 host status = "Running" (err=<nil>)
	I0910 17:56:37.778131   29396 host.go:66] Checking if "ha-558946-m04" exists ...
	I0910 17:56:37.778400   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:37.778430   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:37.792599   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33365
	I0910 17:56:37.792940   29396 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:37.793400   29396 main.go:141] libmachine: Using API Version  1
	I0910 17:56:37.793424   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:37.793694   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:37.793843   29396 main.go:141] libmachine: (ha-558946-m04) Calling .GetIP
	I0910 17:56:37.796312   29396 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:37.796791   29396 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:53:09 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 17:56:37.796815   29396 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:37.796935   29396 host.go:66] Checking if "ha-558946-m04" exists ...
	I0910 17:56:37.797250   29396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:37.797283   29396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:37.812196   29396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37731
	I0910 17:56:37.812577   29396 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:37.813092   29396 main.go:141] libmachine: Using API Version  1
	I0910 17:56:37.813117   29396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:37.813418   29396 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:37.813610   29396 main.go:141] libmachine: (ha-558946-m04) Calling .DriverName
	I0910 17:56:37.813802   29396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:37.813828   29396 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHHostname
	I0910 17:56:37.816615   29396 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:37.816998   29396 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:53:09 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 17:56:37.817026   29396 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:37.817167   29396 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHPort
	I0910 17:56:37.817333   29396 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHKeyPath
	I0910 17:56:37.817480   29396 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHUsername
	I0910 17:56:37.817602   29396 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m04/id_rsa Username:docker}
	I0910 17:56:37.900446   29396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:37.916042   29396 status.go:257] ha-558946-m04 status: &{Name:ha-558946-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
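For the nodes that are healthy, the log confirms the control plane by requesting https://192.168.39.254:8443/healthz and expecting "200: ok". A minimal Go sketch of that probe follows; the endpoint is taken from the log, and skipping TLS verification is an assumption made only because the apiserver certificate in this test cluster is self-signed (do not do this against a production cluster).

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// Illustrative healthz probe matching the "Checking apiserver healthz" lines above.
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver returns 200 with the body "ok", as the log records.
		fmt.Printf("%d: %s\n", resp.StatusCode, body)
	}

A 200 response here is what lets the status command report "apiserver: Running" for ha-558946 and ha-558946-m03 even while ha-558946-m02 remains unreachable over SSH.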
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr
E0910 17:56:40.398841   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr: exit status 3 (4.282989902s)

                                                
                                                
-- stdout --
	ha-558946
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558946-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-558946-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558946-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 17:56:40.044114   29498 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:56:40.044393   29498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:56:40.044404   29498 out.go:358] Setting ErrFile to fd 2...
	I0910 17:56:40.044409   29498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:56:40.044575   29498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:56:40.044733   29498 out.go:352] Setting JSON to false
	I0910 17:56:40.044757   29498 mustload.go:65] Loading cluster: ha-558946
	I0910 17:56:40.044845   29498 notify.go:220] Checking for updates...
	I0910 17:56:40.045203   29498 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:56:40.045219   29498 status.go:255] checking status of ha-558946 ...
	I0910 17:56:40.045638   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:40.045696   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:40.065388   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I0910 17:56:40.065807   29498 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:40.066486   29498 main.go:141] libmachine: Using API Version  1
	I0910 17:56:40.066508   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:40.066809   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:40.066965   29498 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:56:40.068660   29498 status.go:330] ha-558946 host status = "Running" (err=<nil>)
	I0910 17:56:40.068676   29498 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:56:40.068961   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:40.068994   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:40.083414   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34893
	I0910 17:56:40.083743   29498 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:40.084162   29498 main.go:141] libmachine: Using API Version  1
	I0910 17:56:40.084190   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:40.084551   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:40.084720   29498 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 17:56:40.087144   29498 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:40.087509   29498 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:56:40.087539   29498 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:40.087654   29498 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:56:40.087917   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:40.087948   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:40.102663   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36507
	I0910 17:56:40.103018   29498 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:40.103412   29498 main.go:141] libmachine: Using API Version  1
	I0910 17:56:40.103432   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:40.103667   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:40.103830   29498 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:56:40.103986   29498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:40.104014   29498 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:56:40.106399   29498 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:40.106733   29498 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:56:40.106766   29498 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:40.106905   29498 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:56:40.107052   29498 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:56:40.107183   29498 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:56:40.107366   29498 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:56:40.194543   29498 ssh_runner.go:195] Run: systemctl --version
	I0910 17:56:40.201521   29498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:40.220371   29498 kubeconfig.go:125] found "ha-558946" server: "https://192.168.39.254:8443"
	I0910 17:56:40.220405   29498 api_server.go:166] Checking apiserver status ...
	I0910 17:56:40.220463   29498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:56:40.235240   29498 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1143/cgroup
	W0910 17:56:40.246927   29498 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1143/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 17:56:40.246967   29498 ssh_runner.go:195] Run: ls
	I0910 17:56:40.251277   29498 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 17:56:40.255137   29498 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 17:56:40.255156   29498 status.go:422] ha-558946 apiserver status = Running (err=<nil>)
	I0910 17:56:40.255165   29498 status.go:257] ha-558946 status: &{Name:ha-558946 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:56:40.255187   29498 status.go:255] checking status of ha-558946-m02 ...
	I0910 17:56:40.255489   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:40.255527   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:40.270160   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36677
	I0910 17:56:40.270621   29498 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:40.271092   29498 main.go:141] libmachine: Using API Version  1
	I0910 17:56:40.271112   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:40.271409   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:40.271559   29498 main.go:141] libmachine: (ha-558946-m02) Calling .GetState
	I0910 17:56:40.273161   29498 status.go:330] ha-558946-m02 host status = "Running" (err=<nil>)
	I0910 17:56:40.273177   29498 host.go:66] Checking if "ha-558946-m02" exists ...
	I0910 17:56:40.273442   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:40.273476   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:40.287508   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42001
	I0910 17:56:40.287904   29498 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:40.288302   29498 main.go:141] libmachine: Using API Version  1
	I0910 17:56:40.288320   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:40.288594   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:40.288765   29498 main.go:141] libmachine: (ha-558946-m02) Calling .GetIP
	I0910 17:56:40.291470   29498 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:40.291889   29498 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:56:40.291913   29498 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:40.292017   29498 host.go:66] Checking if "ha-558946-m02" exists ...
	I0910 17:56:40.292312   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:40.292361   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:40.307385   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38963
	I0910 17:56:40.307730   29498 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:40.308139   29498 main.go:141] libmachine: Using API Version  1
	I0910 17:56:40.308161   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:40.308457   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:40.308629   29498 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:56:40.308798   29498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:40.308816   29498 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:56:40.311469   29498 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:40.311915   29498 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:56:40.311940   29498 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:40.312089   29498 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:56:40.312247   29498 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:56:40.312382   29498 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:56:40.312485   29498 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa Username:docker}
	W0910 17:56:40.645266   29498 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.96:22: connect: no route to host
	I0910 17:56:40.645308   29498 retry.go:31] will retry after 218.597308ms: dial tcp 192.168.39.96:22: connect: no route to host
	W0910 17:56:43.937303   29498 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.96:22: connect: no route to host
	W0910 17:56:43.937407   29498 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	E0910 17:56:43.937431   29498 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	I0910 17:56:43.937444   29498 status.go:257] ha-558946-m02 status: &{Name:ha-558946-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0910 17:56:43.937469   29498 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	I0910 17:56:43.937484   29498 status.go:255] checking status of ha-558946-m03 ...
	I0910 17:56:43.937793   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:43.937846   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:43.952436   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34069
	I0910 17:56:43.952828   29498 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:43.953285   29498 main.go:141] libmachine: Using API Version  1
	I0910 17:56:43.953309   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:43.953614   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:43.953810   29498 main.go:141] libmachine: (ha-558946-m03) Calling .GetState
	I0910 17:56:43.955237   29498 status.go:330] ha-558946-m03 host status = "Running" (err=<nil>)
	I0910 17:56:43.955251   29498 host.go:66] Checking if "ha-558946-m03" exists ...
	I0910 17:56:43.955603   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:43.955644   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:43.970283   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0910 17:56:43.970624   29498 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:43.971081   29498 main.go:141] libmachine: Using API Version  1
	I0910 17:56:43.971096   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:43.971397   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:43.971645   29498 main.go:141] libmachine: (ha-558946-m03) Calling .GetIP
	I0910 17:56:43.974389   29498 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:43.974740   29498 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:56:43.974767   29498 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:43.974869   29498 host.go:66] Checking if "ha-558946-m03" exists ...
	I0910 17:56:43.975268   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:43.975311   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:43.988973   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44021
	I0910 17:56:43.989328   29498 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:43.989752   29498 main.go:141] libmachine: Using API Version  1
	I0910 17:56:43.989772   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:43.990087   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:43.990280   29498 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:56:43.990484   29498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:43.990505   29498 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:56:43.992956   29498 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:43.993356   29498 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:56:43.993388   29498 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:43.993541   29498 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:56:43.993699   29498 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:56:43.993851   29498 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:56:43.993978   29498 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa Username:docker}
	I0910 17:56:44.076528   29498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:44.092159   29498 kubeconfig.go:125] found "ha-558946" server: "https://192.168.39.254:8443"
	I0910 17:56:44.092183   29498 api_server.go:166] Checking apiserver status ...
	I0910 17:56:44.092213   29498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:56:44.105469   29498 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W0910 17:56:44.115719   29498 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 17:56:44.115767   29498 ssh_runner.go:195] Run: ls
	I0910 17:56:44.120104   29498 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 17:56:44.126183   29498 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 17:56:44.126205   29498 status.go:422] ha-558946-m03 apiserver status = Running (err=<nil>)
	I0910 17:56:44.126215   29498 status.go:257] ha-558946-m03 status: &{Name:ha-558946-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:56:44.126246   29498 status.go:255] checking status of ha-558946-m04 ...
	I0910 17:56:44.126560   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:44.126600   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:44.141974   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46745
	I0910 17:56:44.142363   29498 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:44.142806   29498 main.go:141] libmachine: Using API Version  1
	I0910 17:56:44.142824   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:44.143102   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:44.143304   29498 main.go:141] libmachine: (ha-558946-m04) Calling .GetState
	I0910 17:56:44.144814   29498 status.go:330] ha-558946-m04 host status = "Running" (err=<nil>)
	I0910 17:56:44.144830   29498 host.go:66] Checking if "ha-558946-m04" exists ...
	I0910 17:56:44.145143   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:44.145174   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:44.159051   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I0910 17:56:44.159393   29498 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:44.159786   29498 main.go:141] libmachine: Using API Version  1
	I0910 17:56:44.159803   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:44.160183   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:44.160366   29498 main.go:141] libmachine: (ha-558946-m04) Calling .GetIP
	I0910 17:56:44.162892   29498 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:44.163318   29498 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:53:09 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 17:56:44.163351   29498 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:44.163530   29498 host.go:66] Checking if "ha-558946-m04" exists ...
	I0910 17:56:44.163810   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:44.163840   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:44.179084   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42167
	I0910 17:56:44.179493   29498 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:44.179893   29498 main.go:141] libmachine: Using API Version  1
	I0910 17:56:44.179912   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:44.180202   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:44.180398   29498 main.go:141] libmachine: (ha-558946-m04) Calling .DriverName
	I0910 17:56:44.180689   29498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:44.180704   29498 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHHostname
	I0910 17:56:44.183096   29498 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:44.183471   29498 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:53:09 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 17:56:44.183502   29498 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:44.183612   29498 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHPort
	I0910 17:56:44.183759   29498 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHKeyPath
	I0910 17:56:44.183903   29498 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHUsername
	I0910 17:56:44.184020   29498 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m04/id_rsa Username:docker}
	I0910 17:56:44.272284   29498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:44.286373   29498 status.go:257] ha-558946-m04 status: &{Name:ha-558946-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr: exit status 3 (3.707039424s)

                                                
                                                
-- stdout --
	ha-558946
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558946-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-558946-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558946-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 17:56:49.352998   29614 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:56:49.353131   29614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:56:49.353141   29614 out.go:358] Setting ErrFile to fd 2...
	I0910 17:56:49.353145   29614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:56:49.353307   29614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:56:49.353488   29614 out.go:352] Setting JSON to false
	I0910 17:56:49.353514   29614 mustload.go:65] Loading cluster: ha-558946
	I0910 17:56:49.353616   29614 notify.go:220] Checking for updates...
	I0910 17:56:49.353849   29614 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:56:49.353861   29614 status.go:255] checking status of ha-558946 ...
	I0910 17:56:49.354234   29614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:49.354291   29614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:49.372366   29614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33961
	I0910 17:56:49.372807   29614 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:49.373486   29614 main.go:141] libmachine: Using API Version  1
	I0910 17:56:49.373517   29614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:49.373839   29614 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:49.374014   29614 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:56:49.375653   29614 status.go:330] ha-558946 host status = "Running" (err=<nil>)
	I0910 17:56:49.375670   29614 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:56:49.375938   29614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:49.375991   29614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:49.390838   29614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43625
	I0910 17:56:49.391253   29614 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:49.391727   29614 main.go:141] libmachine: Using API Version  1
	I0910 17:56:49.391746   29614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:49.392030   29614 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:49.392185   29614 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 17:56:49.394618   29614 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:49.394972   29614 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:56:49.395005   29614 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:49.395134   29614 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:56:49.395410   29614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:49.395440   29614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:49.410694   29614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35755
	I0910 17:56:49.411025   29614 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:49.411399   29614 main.go:141] libmachine: Using API Version  1
	I0910 17:56:49.411420   29614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:49.411750   29614 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:49.411923   29614 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:56:49.412113   29614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:49.412143   29614 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:56:49.414574   29614 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:49.414982   29614 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:56:49.415015   29614 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:49.415148   29614 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:56:49.415330   29614 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:56:49.415485   29614 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:56:49.415655   29614 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:56:49.496538   29614 ssh_runner.go:195] Run: systemctl --version
	I0910 17:56:49.503321   29614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:49.517589   29614 kubeconfig.go:125] found "ha-558946" server: "https://192.168.39.254:8443"
	I0910 17:56:49.517626   29614 api_server.go:166] Checking apiserver status ...
	I0910 17:56:49.517661   29614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:56:49.531652   29614 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1143/cgroup
	W0910 17:56:49.541432   29614 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1143/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 17:56:49.541477   29614 ssh_runner.go:195] Run: ls
	I0910 17:56:49.545483   29614 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 17:56:49.551514   29614 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 17:56:49.551533   29614 status.go:422] ha-558946 apiserver status = Running (err=<nil>)
	I0910 17:56:49.551541   29614 status.go:257] ha-558946 status: &{Name:ha-558946 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:56:49.551557   29614 status.go:255] checking status of ha-558946-m02 ...
	I0910 17:56:49.551861   29614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:49.551916   29614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:49.566363   29614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36529
	I0910 17:56:49.566717   29614 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:49.567172   29614 main.go:141] libmachine: Using API Version  1
	I0910 17:56:49.567190   29614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:49.567453   29614 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:49.567621   29614 main.go:141] libmachine: (ha-558946-m02) Calling .GetState
	I0910 17:56:49.569137   29614 status.go:330] ha-558946-m02 host status = "Running" (err=<nil>)
	I0910 17:56:49.569161   29614 host.go:66] Checking if "ha-558946-m02" exists ...
	I0910 17:56:49.569533   29614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:49.569573   29614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:49.583772   29614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39217
	I0910 17:56:49.584185   29614 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:49.584649   29614 main.go:141] libmachine: Using API Version  1
	I0910 17:56:49.584666   29614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:49.584922   29614 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:49.585112   29614 main.go:141] libmachine: (ha-558946-m02) Calling .GetIP
	I0910 17:56:49.587570   29614 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:49.587921   29614 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:56:49.587939   29614 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:49.588090   29614 host.go:66] Checking if "ha-558946-m02" exists ...
	I0910 17:56:49.588489   29614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:49.588531   29614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:49.602170   29614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33407
	I0910 17:56:49.602569   29614 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:49.603003   29614 main.go:141] libmachine: Using API Version  1
	I0910 17:56:49.603021   29614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:49.603261   29614 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:49.603426   29614 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:56:49.603583   29614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:49.603604   29614 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:56:49.606215   29614 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:49.606600   29614 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:56:49.606619   29614 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:49.606726   29614 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:56:49.606865   29614 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:56:49.606974   29614 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:56:49.607068   29614 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa Username:docker}
	W0910 17:56:52.673430   29614 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.96:22: connect: no route to host
	W0910 17:56:52.673551   29614 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	E0910 17:56:52.673577   29614 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	I0910 17:56:52.673589   29614 status.go:257] ha-558946-m02 status: &{Name:ha-558946-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0910 17:56:52.673612   29614 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	I0910 17:56:52.673622   29614 status.go:255] checking status of ha-558946-m03 ...
	I0910 17:56:52.674000   29614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:52.674040   29614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:52.689393   29614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34935
	I0910 17:56:52.689824   29614 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:52.690342   29614 main.go:141] libmachine: Using API Version  1
	I0910 17:56:52.690371   29614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:52.690728   29614 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:52.690938   29614 main.go:141] libmachine: (ha-558946-m03) Calling .GetState
	I0910 17:56:52.692575   29614 status.go:330] ha-558946-m03 host status = "Running" (err=<nil>)
	I0910 17:56:52.692592   29614 host.go:66] Checking if "ha-558946-m03" exists ...
	I0910 17:56:52.693022   29614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:52.693103   29614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:52.708277   29614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44443
	I0910 17:56:52.708745   29614 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:52.709292   29614 main.go:141] libmachine: Using API Version  1
	I0910 17:56:52.709313   29614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:52.709645   29614 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:52.709828   29614 main.go:141] libmachine: (ha-558946-m03) Calling .GetIP
	I0910 17:56:52.712680   29614 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:52.713142   29614 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:56:52.713164   29614 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:52.713341   29614 host.go:66] Checking if "ha-558946-m03" exists ...
	I0910 17:56:52.713635   29614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:52.713673   29614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:52.729535   29614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39523
	I0910 17:56:52.729964   29614 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:52.730461   29614 main.go:141] libmachine: Using API Version  1
	I0910 17:56:52.730488   29614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:52.730764   29614 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:52.730924   29614 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:56:52.731150   29614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:52.731165   29614 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:56:52.733563   29614 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:52.734041   29614 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:56:52.734066   29614 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:56:52.734234   29614 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:56:52.734378   29614 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:56:52.734506   29614 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:56:52.734637   29614 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa Username:docker}
	I0910 17:56:52.816844   29614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:52.830770   29614 kubeconfig.go:125] found "ha-558946" server: "https://192.168.39.254:8443"
	I0910 17:56:52.830796   29614 api_server.go:166] Checking apiserver status ...
	I0910 17:56:52.830837   29614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:56:52.843827   29614 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W0910 17:56:52.853002   29614 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 17:56:52.853052   29614 ssh_runner.go:195] Run: ls
	I0910 17:56:52.857225   29614 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 17:56:52.861563   29614 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 17:56:52.861581   29614 status.go:422] ha-558946-m03 apiserver status = Running (err=<nil>)
	I0910 17:56:52.861589   29614 status.go:257] ha-558946-m03 status: &{Name:ha-558946-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:56:52.861602   29614 status.go:255] checking status of ha-558946-m04 ...
	I0910 17:56:52.861929   29614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:52.861963   29614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:52.876689   29614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0910 17:56:52.877148   29614 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:52.877632   29614 main.go:141] libmachine: Using API Version  1
	I0910 17:56:52.877650   29614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:52.877964   29614 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:52.878128   29614 main.go:141] libmachine: (ha-558946-m04) Calling .GetState
	I0910 17:56:52.879615   29614 status.go:330] ha-558946-m04 host status = "Running" (err=<nil>)
	I0910 17:56:52.879632   29614 host.go:66] Checking if "ha-558946-m04" exists ...
	I0910 17:56:52.879957   29614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:52.879988   29614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:52.893926   29614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37433
	I0910 17:56:52.894237   29614 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:52.894664   29614 main.go:141] libmachine: Using API Version  1
	I0910 17:56:52.894683   29614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:52.894966   29614 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:52.895138   29614 main.go:141] libmachine: (ha-558946-m04) Calling .GetIP
	I0910 17:56:52.897677   29614 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:52.898098   29614 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:53:09 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 17:56:52.898130   29614 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:52.898243   29614 host.go:66] Checking if "ha-558946-m04" exists ...
	I0910 17:56:52.898523   29614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:52.898569   29614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:52.912520   29614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33351
	I0910 17:56:52.912887   29614 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:52.913294   29614 main.go:141] libmachine: Using API Version  1
	I0910 17:56:52.913319   29614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:52.913648   29614 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:52.913817   29614 main.go:141] libmachine: (ha-558946-m04) Calling .DriverName
	I0910 17:56:52.913997   29614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:52.914014   29614 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHHostname
	I0910 17:56:52.916602   29614 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:52.917217   29614 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:53:09 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 17:56:52.917253   29614 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:56:52.917389   29614 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHPort
	I0910 17:56:52.917549   29614 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHKeyPath
	I0910 17:56:52.917692   29614 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHUsername
	I0910 17:56:52.917812   29614 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m04/id_rsa Username:docker}
	I0910 17:56:53.004579   29614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:53.020527   29614 status.go:257] ha-558946-m04 status: &{Name:ha-558946-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr: exit status 3 (3.730979064s)

                                                
                                                
-- stdout --
	ha-558946
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558946-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-558946-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558946-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 17:56:58.784499   29730 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:56:58.784589   29730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:56:58.784593   29730 out.go:358] Setting ErrFile to fd 2...
	I0910 17:56:58.784597   29730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:56:58.784754   29730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:56:58.784917   29730 out.go:352] Setting JSON to false
	I0910 17:56:58.784948   29730 mustload.go:65] Loading cluster: ha-558946
	I0910 17:56:58.785044   29730 notify.go:220] Checking for updates...
	I0910 17:56:58.785307   29730 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:56:58.785325   29730 status.go:255] checking status of ha-558946 ...
	I0910 17:56:58.785697   29730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:58.785749   29730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:58.805004   29730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41477
	I0910 17:56:58.805514   29730 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:58.806061   29730 main.go:141] libmachine: Using API Version  1
	I0910 17:56:58.806086   29730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:58.806447   29730 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:58.806700   29730 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:56:58.808364   29730 status.go:330] ha-558946 host status = "Running" (err=<nil>)
	I0910 17:56:58.808381   29730 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:56:58.808703   29730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:58.808751   29730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:58.823063   29730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37439
	I0910 17:56:58.823442   29730 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:58.823887   29730 main.go:141] libmachine: Using API Version  1
	I0910 17:56:58.823904   29730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:58.824195   29730 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:58.824362   29730 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 17:56:58.826811   29730 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:58.827199   29730 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:56:58.827230   29730 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:58.827332   29730 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:56:58.827622   29730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:58.827651   29730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:58.841765   29730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33459
	I0910 17:56:58.842117   29730 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:58.842711   29730 main.go:141] libmachine: Using API Version  1
	I0910 17:56:58.842756   29730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:58.843032   29730 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:58.843208   29730 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:56:58.843368   29730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:58.843385   29730 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:56:58.845872   29730 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:58.846353   29730 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:56:58.846380   29730 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:56:58.846528   29730 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:56:58.846689   29730 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:56:58.846835   29730 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:56:58.846956   29730 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:56:58.928861   29730 ssh_runner.go:195] Run: systemctl --version
	I0910 17:56:58.939147   29730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:56:58.954072   29730 kubeconfig.go:125] found "ha-558946" server: "https://192.168.39.254:8443"
	I0910 17:56:58.954099   29730 api_server.go:166] Checking apiserver status ...
	I0910 17:56:58.954131   29730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:56:58.968005   29730 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1143/cgroup
	W0910 17:56:58.978896   29730 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1143/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 17:56:58.978931   29730 ssh_runner.go:195] Run: ls
	I0910 17:56:58.983519   29730 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 17:56:58.987550   29730 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 17:56:58.987569   29730 status.go:422] ha-558946 apiserver status = Running (err=<nil>)
	I0910 17:56:58.987578   29730 status.go:257] ha-558946 status: &{Name:ha-558946 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:56:58.987596   29730 status.go:255] checking status of ha-558946-m02 ...
	I0910 17:56:58.987890   29730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:58.987928   29730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:59.002396   29730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44489
	I0910 17:56:59.002765   29730 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:59.003155   29730 main.go:141] libmachine: Using API Version  1
	I0910 17:56:59.003171   29730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:59.003505   29730 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:59.003658   29730 main.go:141] libmachine: (ha-558946-m02) Calling .GetState
	I0910 17:56:59.005181   29730 status.go:330] ha-558946-m02 host status = "Running" (err=<nil>)
	I0910 17:56:59.005198   29730 host.go:66] Checking if "ha-558946-m02" exists ...
	I0910 17:56:59.005579   29730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:59.005622   29730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:59.020452   29730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38013
	I0910 17:56:59.020819   29730 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:59.021288   29730 main.go:141] libmachine: Using API Version  1
	I0910 17:56:59.021303   29730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:59.021586   29730 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:59.021779   29730 main.go:141] libmachine: (ha-558946-m02) Calling .GetIP
	I0910 17:56:59.024385   29730 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:59.024819   29730 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:56:59.024855   29730 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:59.024989   29730 host.go:66] Checking if "ha-558946-m02" exists ...
	I0910 17:56:59.025323   29730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:56:59.025358   29730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:56:59.039805   29730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40185
	I0910 17:56:59.040121   29730 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:56:59.040543   29730 main.go:141] libmachine: Using API Version  1
	I0910 17:56:59.040559   29730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:56:59.040823   29730 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:56:59.041012   29730 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:56:59.041207   29730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:56:59.041235   29730 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:56:59.043500   29730 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:59.043853   29730 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:56:59.043875   29730 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:56:59.044019   29730 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:56:59.044187   29730 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:56:59.044317   29730 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:56:59.044453   29730 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa Username:docker}
	W0910 17:57:02.113310   29730 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.96:22: connect: no route to host
	W0910 17:57:02.113403   29730 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	E0910 17:57:02.113422   29730 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	I0910 17:57:02.113430   29730 status.go:257] ha-558946-m02 status: &{Name:ha-558946-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0910 17:57:02.113465   29730 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	I0910 17:57:02.113473   29730 status.go:255] checking status of ha-558946-m03 ...
	I0910 17:57:02.113777   29730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:57:02.113813   29730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:57:02.128805   29730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35485
	I0910 17:57:02.129301   29730 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:57:02.129835   29730 main.go:141] libmachine: Using API Version  1
	I0910 17:57:02.129861   29730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:57:02.130201   29730 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:57:02.130382   29730 main.go:141] libmachine: (ha-558946-m03) Calling .GetState
	I0910 17:57:02.132159   29730 status.go:330] ha-558946-m03 host status = "Running" (err=<nil>)
	I0910 17:57:02.132179   29730 host.go:66] Checking if "ha-558946-m03" exists ...
	I0910 17:57:02.132496   29730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:57:02.132561   29730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:57:02.146616   29730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34225
	I0910 17:57:02.147013   29730 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:57:02.147408   29730 main.go:141] libmachine: Using API Version  1
	I0910 17:57:02.147426   29730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:57:02.147706   29730 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:57:02.147928   29730 main.go:141] libmachine: (ha-558946-m03) Calling .GetIP
	I0910 17:57:02.150832   29730 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:57:02.151196   29730 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:57:02.151232   29730 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:57:02.151385   29730 host.go:66] Checking if "ha-558946-m03" exists ...
	I0910 17:57:02.151698   29730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:57:02.151755   29730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:57:02.166575   29730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
	I0910 17:57:02.166967   29730 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:57:02.167391   29730 main.go:141] libmachine: Using API Version  1
	I0910 17:57:02.167413   29730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:57:02.167840   29730 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:57:02.168057   29730 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:57:02.168271   29730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:57:02.168290   29730 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:57:02.170974   29730 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:57:02.171411   29730 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:57:02.171443   29730 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:57:02.171595   29730 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:57:02.171752   29730 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:57:02.171920   29730 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:57:02.172054   29730 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa Username:docker}
	I0910 17:57:02.261791   29730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:57:02.277087   29730 kubeconfig.go:125] found "ha-558946" server: "https://192.168.39.254:8443"
	I0910 17:57:02.277114   29730 api_server.go:166] Checking apiserver status ...
	I0910 17:57:02.277153   29730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:57:02.290982   29730 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W0910 17:57:02.300839   29730 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 17:57:02.300895   29730 ssh_runner.go:195] Run: ls
	I0910 17:57:02.305485   29730 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 17:57:02.313023   29730 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 17:57:02.313046   29730 status.go:422] ha-558946-m03 apiserver status = Running (err=<nil>)
	I0910 17:57:02.313056   29730 status.go:257] ha-558946-m03 status: &{Name:ha-558946-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:57:02.313099   29730 status.go:255] checking status of ha-558946-m04 ...
	I0910 17:57:02.313487   29730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:57:02.313535   29730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:57:02.328825   29730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36691
	I0910 17:57:02.329246   29730 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:57:02.329740   29730 main.go:141] libmachine: Using API Version  1
	I0910 17:57:02.329758   29730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:57:02.330031   29730 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:57:02.330212   29730 main.go:141] libmachine: (ha-558946-m04) Calling .GetState
	I0910 17:57:02.331639   29730 status.go:330] ha-558946-m04 host status = "Running" (err=<nil>)
	I0910 17:57:02.331656   29730 host.go:66] Checking if "ha-558946-m04" exists ...
	I0910 17:57:02.331925   29730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:57:02.331965   29730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:57:02.346115   29730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37563
	I0910 17:57:02.346451   29730 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:57:02.346815   29730 main.go:141] libmachine: Using API Version  1
	I0910 17:57:02.346833   29730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:57:02.347080   29730 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:57:02.347248   29730 main.go:141] libmachine: (ha-558946-m04) Calling .GetIP
	I0910 17:57:02.349751   29730 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:57:02.350126   29730 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:53:09 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 17:57:02.350167   29730 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:57:02.350301   29730 host.go:66] Checking if "ha-558946-m04" exists ...
	I0910 17:57:02.350692   29730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:57:02.350732   29730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:57:02.365392   29730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39871
	I0910 17:57:02.365829   29730 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:57:02.366266   29730 main.go:141] libmachine: Using API Version  1
	I0910 17:57:02.366306   29730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:57:02.366609   29730 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:57:02.366759   29730 main.go:141] libmachine: (ha-558946-m04) Calling .DriverName
	I0910 17:57:02.366942   29730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:57:02.366960   29730 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHHostname
	I0910 17:57:02.369953   29730 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:57:02.370475   29730 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:53:09 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 17:57:02.370494   29730 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:57:02.370744   29730 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHPort
	I0910 17:57:02.371027   29730 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHKeyPath
	I0910 17:57:02.371205   29730 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHUsername
	I0910 17:57:02.371332   29730 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m04/id_rsa Username:docker}
	I0910 17:57:02.460240   29730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:57:02.473918   29730 status.go:257] ha-558946-m04 status: &{Name:ha-558946-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr: exit status 7 (618.963737ms)

                                                
                                                
-- stdout --
	ha-558946
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558946-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-558946-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558946-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 17:57:12.883714   29883 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:57:12.883856   29883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:57:12.883868   29883 out.go:358] Setting ErrFile to fd 2...
	I0910 17:57:12.883875   29883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:57:12.884117   29883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:57:12.884323   29883 out.go:352] Setting JSON to false
	I0910 17:57:12.884360   29883 mustload.go:65] Loading cluster: ha-558946
	I0910 17:57:12.884461   29883 notify.go:220] Checking for updates...
	I0910 17:57:12.884869   29883 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:57:12.884889   29883 status.go:255] checking status of ha-558946 ...
	I0910 17:57:12.885524   29883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:57:12.885586   29883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:57:12.904386   29883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44395
	I0910 17:57:12.904793   29883 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:57:12.905368   29883 main.go:141] libmachine: Using API Version  1
	I0910 17:57:12.905429   29883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:57:12.905826   29883 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:57:12.906037   29883 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:57:12.907734   29883 status.go:330] ha-558946 host status = "Running" (err=<nil>)
	I0910 17:57:12.907751   29883 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:57:12.908033   29883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:57:12.908079   29883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:57:12.922865   29883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45063
	I0910 17:57:12.923240   29883 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:57:12.923727   29883 main.go:141] libmachine: Using API Version  1
	I0910 17:57:12.923744   29883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:57:12.924063   29883 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:57:12.924247   29883 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 17:57:12.926804   29883 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:57:12.927209   29883 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:57:12.927238   29883 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:57:12.927358   29883 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:57:12.927683   29883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:57:12.927730   29883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:57:12.941624   29883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42943
	I0910 17:57:12.942096   29883 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:57:12.942611   29883 main.go:141] libmachine: Using API Version  1
	I0910 17:57:12.942636   29883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:57:12.942938   29883 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:57:12.943078   29883 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:57:12.943273   29883 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:57:12.943312   29883 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:57:12.945928   29883 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:57:12.946306   29883 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:57:12.946328   29883 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:57:12.946426   29883 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:57:12.946557   29883 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:57:12.946705   29883 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:57:12.946822   29883 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:57:13.033082   29883 ssh_runner.go:195] Run: systemctl --version
	I0910 17:57:13.039253   29883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:57:13.053666   29883 kubeconfig.go:125] found "ha-558946" server: "https://192.168.39.254:8443"
	I0910 17:57:13.053699   29883 api_server.go:166] Checking apiserver status ...
	I0910 17:57:13.053742   29883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:57:13.067368   29883 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1143/cgroup
	W0910 17:57:13.076584   29883 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1143/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 17:57:13.076623   29883 ssh_runner.go:195] Run: ls
	I0910 17:57:13.081044   29883 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 17:57:13.085160   29883 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 17:57:13.085179   29883 status.go:422] ha-558946 apiserver status = Running (err=<nil>)
	I0910 17:57:13.085191   29883 status.go:257] ha-558946 status: &{Name:ha-558946 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:57:13.085211   29883 status.go:255] checking status of ha-558946-m02 ...
	I0910 17:57:13.085541   29883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:57:13.085580   29883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:57:13.100007   29883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37319
	I0910 17:57:13.100439   29883 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:57:13.100841   29883 main.go:141] libmachine: Using API Version  1
	I0910 17:57:13.100858   29883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:57:13.101119   29883 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:57:13.101262   29883 main.go:141] libmachine: (ha-558946-m02) Calling .GetState
	I0910 17:57:13.102787   29883 status.go:330] ha-558946-m02 host status = "Stopped" (err=<nil>)
	I0910 17:57:13.102799   29883 status.go:343] host is not running, skipping remaining checks
	I0910 17:57:13.102807   29883 status.go:257] ha-558946-m02 status: &{Name:ha-558946-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:57:13.102826   29883 status.go:255] checking status of ha-558946-m03 ...
	I0910 17:57:13.103104   29883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:57:13.103149   29883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:57:13.117983   29883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34797
	I0910 17:57:13.118445   29883 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:57:13.118933   29883 main.go:141] libmachine: Using API Version  1
	I0910 17:57:13.118967   29883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:57:13.119248   29883 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:57:13.119446   29883 main.go:141] libmachine: (ha-558946-m03) Calling .GetState
	I0910 17:57:13.120847   29883 status.go:330] ha-558946-m03 host status = "Running" (err=<nil>)
	I0910 17:57:13.120862   29883 host.go:66] Checking if "ha-558946-m03" exists ...
	I0910 17:57:13.121273   29883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:57:13.121316   29883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:57:13.136552   29883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37893
	I0910 17:57:13.136954   29883 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:57:13.137397   29883 main.go:141] libmachine: Using API Version  1
	I0910 17:57:13.137423   29883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:57:13.137683   29883 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:57:13.137852   29883 main.go:141] libmachine: (ha-558946-m03) Calling .GetIP
	I0910 17:57:13.140265   29883 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:57:13.140681   29883 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:57:13.140707   29883 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:57:13.140789   29883 host.go:66] Checking if "ha-558946-m03" exists ...
	I0910 17:57:13.141216   29883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:57:13.141255   29883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:57:13.156423   29883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41019
	I0910 17:57:13.156735   29883 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:57:13.157249   29883 main.go:141] libmachine: Using API Version  1
	I0910 17:57:13.157285   29883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:57:13.157567   29883 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:57:13.157747   29883 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:57:13.157935   29883 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:57:13.157955   29883 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:57:13.160242   29883 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:57:13.160654   29883 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:57:13.160682   29883 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:57:13.160794   29883 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:57:13.160956   29883 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:57:13.161113   29883 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:57:13.161240   29883 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa Username:docker}
	I0910 17:57:13.249102   29883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:57:13.266445   29883 kubeconfig.go:125] found "ha-558946" server: "https://192.168.39.254:8443"
	I0910 17:57:13.266469   29883 api_server.go:166] Checking apiserver status ...
	I0910 17:57:13.266503   29883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:57:13.284515   29883 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W0910 17:57:13.294728   29883 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 17:57:13.294779   29883 ssh_runner.go:195] Run: ls
	I0910 17:57:13.299266   29883 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 17:57:13.303765   29883 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 17:57:13.303788   29883 status.go:422] ha-558946-m03 apiserver status = Running (err=<nil>)
	I0910 17:57:13.303799   29883 status.go:257] ha-558946-m03 status: &{Name:ha-558946-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:57:13.303834   29883 status.go:255] checking status of ha-558946-m04 ...
	I0910 17:57:13.304134   29883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:57:13.304166   29883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:57:13.319888   29883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40137
	I0910 17:57:13.320306   29883 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:57:13.320797   29883 main.go:141] libmachine: Using API Version  1
	I0910 17:57:13.320816   29883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:57:13.321122   29883 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:57:13.321329   29883 main.go:141] libmachine: (ha-558946-m04) Calling .GetState
	I0910 17:57:13.322869   29883 status.go:330] ha-558946-m04 host status = "Running" (err=<nil>)
	I0910 17:57:13.322884   29883 host.go:66] Checking if "ha-558946-m04" exists ...
	I0910 17:57:13.323172   29883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:57:13.323209   29883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:57:13.337270   29883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40831
	I0910 17:57:13.337574   29883 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:57:13.337935   29883 main.go:141] libmachine: Using API Version  1
	I0910 17:57:13.337948   29883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:57:13.338193   29883 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:57:13.338354   29883 main.go:141] libmachine: (ha-558946-m04) Calling .GetIP
	I0910 17:57:13.340699   29883 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:57:13.341121   29883 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:53:09 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 17:57:13.341147   29883 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:57:13.341225   29883 host.go:66] Checking if "ha-558946-m04" exists ...
	I0910 17:57:13.341495   29883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:57:13.341532   29883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:57:13.355131   29883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32785
	I0910 17:57:13.355546   29883 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:57:13.356010   29883 main.go:141] libmachine: Using API Version  1
	I0910 17:57:13.356030   29883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:57:13.356272   29883 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:57:13.356445   29883 main.go:141] libmachine: (ha-558946-m04) Calling .DriverName
	I0910 17:57:13.356613   29883 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:57:13.356630   29883 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHHostname
	I0910 17:57:13.359208   29883 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:57:13.359535   29883 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:53:09 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 17:57:13.359573   29883 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:57:13.359722   29883 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHPort
	I0910 17:57:13.359871   29883 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHKeyPath
	I0910 17:57:13.360006   29883 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHUsername
	I0910 17:57:13.360140   29883 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m04/id_rsa Username:docker}
	I0910 17:57:13.445022   29883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:57:13.461137   29883 status.go:257] ha-558946-m04 status: &{Name:ha-558946-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr" : exit status 7
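The stdout block above is the plain-text form the test inspects: ha-558946-m02 still reports host, kubelet, and apiserver as Stopped after the node start command, and the status invocation exits non-zero (exit status 7 here), which is what ha_test.go:432 reports as the failure. Below is a minimal, hypothetical Go sketch of parsing that output into per-node states to decide whether the restarted secondary ever recovered; the struct and function names are invented for illustration, and only the field labels (host:, kubelet:, apiserver:) are taken from the output above.

package main

import (
	"fmt"
	"strings"
)

type nodeStatus struct {
	Name, Host, Kubelet, APIServer string
}

// parseStatus walks the plain-text status output shown above: an unprefixed
// line starts a new node entry, and the host:/kubelet:/apiserver: lines fill
// in that entry's fields.
func parseStatus(out string) []nodeStatus {
	var nodes []nodeStatus
	for _, raw := range strings.Split(out, "\n") {
		line := strings.TrimSpace(raw)
		switch {
		case line == "", strings.HasPrefix(line, "type:"), strings.HasPrefix(line, "kubeconfig:"):
			// not needed for this check
		case strings.HasPrefix(line, "host:") && len(nodes) > 0:
			nodes[len(nodes)-1].Host = strings.TrimSpace(strings.TrimPrefix(line, "host:"))
		case strings.HasPrefix(line, "kubelet:") && len(nodes) > 0:
			nodes[len(nodes)-1].Kubelet = strings.TrimSpace(strings.TrimPrefix(line, "kubelet:"))
		case strings.HasPrefix(line, "apiserver:") && len(nodes) > 0:
			nodes[len(nodes)-1].APIServer = strings.TrimSpace(strings.TrimPrefix(line, "apiserver:"))
		default:
			nodes = append(nodes, nodeStatus{Name: line})
		}
	}
	return nodes
}

func main() {
	out := "ha-558946-m02\n\ttype: Control Plane\n\thost: Stopped\n\tkubelet: Stopped\n\tapiserver: Stopped\n\tkubeconfig: Stopped\n"
	for _, n := range parseStatus(out) {
		recovered := n.Host == "Running" && n.Kubelet == "Running" && n.APIServer == "Running"
		fmt.Printf("%s recovered=%v (host=%s kubelet=%s apiserver=%s)\n",
			n.Name, recovered, n.Host, n.Kubelet, n.APIServer)
	}
}

A worker node such as ha-558946-m04 never reports an apiserver, which is why the output above shows it with only host and kubelet lines and the stderr log marks APIServer as Irrelevant for it.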
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-558946 -n ha-558946
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-558946 logs -n 25: (1.339174802s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m03:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946:/home/docker/cp-test_ha-558946-m03_ha-558946.txt                       |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946 sudo cat                                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m03_ha-558946.txt                                 |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m03:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m02:/home/docker/cp-test_ha-558946-m03_ha-558946-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946-m02 sudo cat                                          | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m03_ha-558946-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m03:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04:/home/docker/cp-test_ha-558946-m03_ha-558946-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946-m04 sudo cat                                          | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m03_ha-558946-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-558946 cp testdata/cp-test.txt                                                | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1374294018/001/cp-test_ha-558946-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946:/home/docker/cp-test_ha-558946-m04_ha-558946.txt                       |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946 sudo cat                                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m04_ha-558946.txt                                 |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m02:/home/docker/cp-test_ha-558946-m04_ha-558946-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946-m02 sudo cat                                          | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m04_ha-558946-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m03:/home/docker/cp-test_ha-558946-m04_ha-558946-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946-m03 sudo cat                                          | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m04_ha-558946-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-558946 node stop m02 -v=7                                                     | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-558946 node start m02 -v=7                                                    | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:49:39
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:49:39.086967   24502 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:49:39.087076   24502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:49:39.087088   24502 out.go:358] Setting ErrFile to fd 2...
	I0910 17:49:39.087093   24502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:49:39.087295   24502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:49:39.087922   24502 out.go:352] Setting JSON to false
	I0910 17:49:39.088839   24502 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1931,"bootTime":1725988648,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:49:39.088900   24502 start.go:139] virtualization: kvm guest
	I0910 17:49:39.090775   24502 out.go:177] * [ha-558946] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 17:49:39.091795   24502 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:49:39.091834   24502 notify.go:220] Checking for updates...
	I0910 17:49:39.093979   24502 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:49:39.095078   24502 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:49:39.096084   24502 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:49:39.097036   24502 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 17:49:39.098065   24502 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:49:39.099338   24502 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:49:39.132527   24502 out.go:177] * Using the kvm2 driver based on user configuration
	I0910 17:49:39.133697   24502 start.go:297] selected driver: kvm2
	I0910 17:49:39.133707   24502 start.go:901] validating driver "kvm2" against <nil>
	I0910 17:49:39.133716   24502 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:49:39.134329   24502 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:49:39.134391   24502 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 17:49:39.148496   24502 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 17:49:39.148548   24502 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 17:49:39.148733   24502 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:49:39.148762   24502 cni.go:84] Creating CNI manager for ""
	I0910 17:49:39.148768   24502 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0910 17:49:39.148775   24502 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0910 17:49:39.148813   24502 start.go:340] cluster config:
	{Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0910 17:49:39.148892   24502 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:49:39.150381   24502 out.go:177] * Starting "ha-558946" primary control-plane node in "ha-558946" cluster
	I0910 17:49:39.151311   24502 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:49:39.151349   24502 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0910 17:49:39.151357   24502 cache.go:56] Caching tarball of preloaded images
	I0910 17:49:39.151422   24502 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 17:49:39.151432   24502 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 17:49:39.151708   24502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:49:39.151728   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json: {Name:mkfc34283f0a4aac201e0c3ede39cbef107c60af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:49:39.151850   24502 start.go:360] acquireMachinesLock for ha-558946: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 17:49:39.151876   24502 start.go:364] duration metric: took 14.944µs to acquireMachinesLock for "ha-558946"
	I0910 17:49:39.151892   24502 start.go:93] Provisioning new machine with config: &{Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:49:39.151937   24502 start.go:125] createHost starting for "" (driver="kvm2")
	I0910 17:49:39.154101   24502 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 17:49:39.154205   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:49:39.154246   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:49:39.167763   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37027
	I0910 17:49:39.168154   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:49:39.168659   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:49:39.168682   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:49:39.168967   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:49:39.169149   24502 main.go:141] libmachine: (ha-558946) Calling .GetMachineName
	I0910 17:49:39.169300   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:49:39.169443   24502 start.go:159] libmachine.API.Create for "ha-558946" (driver="kvm2")
	I0910 17:49:39.169469   24502 client.go:168] LocalClient.Create starting
	I0910 17:49:39.169498   24502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem
	I0910 17:49:39.169532   24502 main.go:141] libmachine: Decoding PEM data...
	I0910 17:49:39.169548   24502 main.go:141] libmachine: Parsing certificate...
	I0910 17:49:39.169615   24502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem
	I0910 17:49:39.169640   24502 main.go:141] libmachine: Decoding PEM data...
	I0910 17:49:39.169656   24502 main.go:141] libmachine: Parsing certificate...
	I0910 17:49:39.169689   24502 main.go:141] libmachine: Running pre-create checks...
	I0910 17:49:39.169715   24502 main.go:141] libmachine: (ha-558946) Calling .PreCreateCheck
	I0910 17:49:39.170013   24502 main.go:141] libmachine: (ha-558946) Calling .GetConfigRaw
	I0910 17:49:39.170340   24502 main.go:141] libmachine: Creating machine...
	I0910 17:49:39.170352   24502 main.go:141] libmachine: (ha-558946) Calling .Create
	I0910 17:49:39.170460   24502 main.go:141] libmachine: (ha-558946) Creating KVM machine...
	I0910 17:49:39.171642   24502 main.go:141] libmachine: (ha-558946) DBG | found existing default KVM network
	I0910 17:49:39.172283   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:39.172172   24525 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0910 17:49:39.172333   24502 main.go:141] libmachine: (ha-558946) DBG | created network xml: 
	I0910 17:49:39.172352   24502 main.go:141] libmachine: (ha-558946) DBG | <network>
	I0910 17:49:39.172374   24502 main.go:141] libmachine: (ha-558946) DBG |   <name>mk-ha-558946</name>
	I0910 17:49:39.172387   24502 main.go:141] libmachine: (ha-558946) DBG |   <dns enable='no'/>
	I0910 17:49:39.172397   24502 main.go:141] libmachine: (ha-558946) DBG |   
	I0910 17:49:39.172408   24502 main.go:141] libmachine: (ha-558946) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0910 17:49:39.172417   24502 main.go:141] libmachine: (ha-558946) DBG |     <dhcp>
	I0910 17:49:39.172433   24502 main.go:141] libmachine: (ha-558946) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0910 17:49:39.172444   24502 main.go:141] libmachine: (ha-558946) DBG |     </dhcp>
	I0910 17:49:39.172454   24502 main.go:141] libmachine: (ha-558946) DBG |   </ip>
	I0910 17:49:39.172462   24502 main.go:141] libmachine: (ha-558946) DBG |   
	I0910 17:49:39.172471   24502 main.go:141] libmachine: (ha-558946) DBG | </network>
	I0910 17:49:39.172477   24502 main.go:141] libmachine: (ha-558946) DBG | 
	I0910 17:49:39.176861   24502 main.go:141] libmachine: (ha-558946) DBG | trying to create private KVM network mk-ha-558946 192.168.39.0/24...
	I0910 17:49:39.239779   24502 main.go:141] libmachine: (ha-558946) DBG | private KVM network mk-ha-558946 192.168.39.0/24 created
	I0910 17:49:39.239809   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:39.239740   24525 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:49:39.239837   24502 main.go:141] libmachine: (ha-558946) Setting up store path in /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946 ...
	I0910 17:49:39.239857   24502 main.go:141] libmachine: (ha-558946) Building disk image from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 17:49:39.239871   24502 main.go:141] libmachine: (ha-558946) Downloading /home/jenkins/minikube-integration/19598-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 17:49:39.479765   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:39.479649   24525 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa...
	I0910 17:49:39.643695   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:39.643588   24525 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/ha-558946.rawdisk...
	I0910 17:49:39.643718   24502 main.go:141] libmachine: (ha-558946) DBG | Writing magic tar header
	I0910 17:49:39.643731   24502 main.go:141] libmachine: (ha-558946) DBG | Writing SSH key tar header
	I0910 17:49:39.643742   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:39.643695   24525 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946 ...
	I0910 17:49:39.643824   24502 main.go:141] libmachine: (ha-558946) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946
	I0910 17:49:39.643862   24502 main.go:141] libmachine: (ha-558946) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946 (perms=drwx------)
	I0910 17:49:39.643873   24502 main.go:141] libmachine: (ha-558946) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines
	I0910 17:49:39.643888   24502 main.go:141] libmachine: (ha-558946) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:49:39.643902   24502 main.go:141] libmachine: (ha-558946) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973
	I0910 17:49:39.643912   24502 main.go:141] libmachine: (ha-558946) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0910 17:49:39.643924   24502 main.go:141] libmachine: (ha-558946) DBG | Checking permissions on dir: /home/jenkins
	I0910 17:49:39.643934   24502 main.go:141] libmachine: (ha-558946) DBG | Checking permissions on dir: /home
	I0910 17:49:39.643945   24502 main.go:141] libmachine: (ha-558946) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines (perms=drwxr-xr-x)
	I0910 17:49:39.643956   24502 main.go:141] libmachine: (ha-558946) DBG | Skipping /home - not owner
	I0910 17:49:39.643994   24502 main.go:141] libmachine: (ha-558946) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube (perms=drwxr-xr-x)
	I0910 17:49:39.644020   24502 main.go:141] libmachine: (ha-558946) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973 (perms=drwxrwxr-x)
	I0910 17:49:39.644029   24502 main.go:141] libmachine: (ha-558946) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0910 17:49:39.644042   24502 main.go:141] libmachine: (ha-558946) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0910 17:49:39.644056   24502 main.go:141] libmachine: (ha-558946) Creating domain...
	I0910 17:49:39.644855   24502 main.go:141] libmachine: (ha-558946) define libvirt domain using xml: 
	I0910 17:49:39.644875   24502 main.go:141] libmachine: (ha-558946) <domain type='kvm'>
	I0910 17:49:39.644884   24502 main.go:141] libmachine: (ha-558946)   <name>ha-558946</name>
	I0910 17:49:39.644899   24502 main.go:141] libmachine: (ha-558946)   <memory unit='MiB'>2200</memory>
	I0910 17:49:39.644911   24502 main.go:141] libmachine: (ha-558946)   <vcpu>2</vcpu>
	I0910 17:49:39.644921   24502 main.go:141] libmachine: (ha-558946)   <features>
	I0910 17:49:39.644930   24502 main.go:141] libmachine: (ha-558946)     <acpi/>
	I0910 17:49:39.644941   24502 main.go:141] libmachine: (ha-558946)     <apic/>
	I0910 17:49:39.644948   24502 main.go:141] libmachine: (ha-558946)     <pae/>
	I0910 17:49:39.644958   24502 main.go:141] libmachine: (ha-558946)     
	I0910 17:49:39.644965   24502 main.go:141] libmachine: (ha-558946)   </features>
	I0910 17:49:39.644981   24502 main.go:141] libmachine: (ha-558946)   <cpu mode='host-passthrough'>
	I0910 17:49:39.645005   24502 main.go:141] libmachine: (ha-558946)   
	I0910 17:49:39.645024   24502 main.go:141] libmachine: (ha-558946)   </cpu>
	I0910 17:49:39.645044   24502 main.go:141] libmachine: (ha-558946)   <os>
	I0910 17:49:39.645060   24502 main.go:141] libmachine: (ha-558946)     <type>hvm</type>
	I0910 17:49:39.645093   24502 main.go:141] libmachine: (ha-558946)     <boot dev='cdrom'/>
	I0910 17:49:39.645108   24502 main.go:141] libmachine: (ha-558946)     <boot dev='hd'/>
	I0910 17:49:39.645121   24502 main.go:141] libmachine: (ha-558946)     <bootmenu enable='no'/>
	I0910 17:49:39.645130   24502 main.go:141] libmachine: (ha-558946)   </os>
	I0910 17:49:39.645141   24502 main.go:141] libmachine: (ha-558946)   <devices>
	I0910 17:49:39.645152   24502 main.go:141] libmachine: (ha-558946)     <disk type='file' device='cdrom'>
	I0910 17:49:39.645167   24502 main.go:141] libmachine: (ha-558946)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/boot2docker.iso'/>
	I0910 17:49:39.645175   24502 main.go:141] libmachine: (ha-558946)       <target dev='hdc' bus='scsi'/>
	I0910 17:49:39.645199   24502 main.go:141] libmachine: (ha-558946)       <readonly/>
	I0910 17:49:39.645220   24502 main.go:141] libmachine: (ha-558946)     </disk>
	I0910 17:49:39.645234   24502 main.go:141] libmachine: (ha-558946)     <disk type='file' device='disk'>
	I0910 17:49:39.645245   24502 main.go:141] libmachine: (ha-558946)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0910 17:49:39.645258   24502 main.go:141] libmachine: (ha-558946)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/ha-558946.rawdisk'/>
	I0910 17:49:39.645272   24502 main.go:141] libmachine: (ha-558946)       <target dev='hda' bus='virtio'/>
	I0910 17:49:39.645285   24502 main.go:141] libmachine: (ha-558946)     </disk>
	I0910 17:49:39.645301   24502 main.go:141] libmachine: (ha-558946)     <interface type='network'>
	I0910 17:49:39.645324   24502 main.go:141] libmachine: (ha-558946)       <source network='mk-ha-558946'/>
	I0910 17:49:39.645344   24502 main.go:141] libmachine: (ha-558946)       <model type='virtio'/>
	I0910 17:49:39.645355   24502 main.go:141] libmachine: (ha-558946)     </interface>
	I0910 17:49:39.645370   24502 main.go:141] libmachine: (ha-558946)     <interface type='network'>
	I0910 17:49:39.645399   24502 main.go:141] libmachine: (ha-558946)       <source network='default'/>
	I0910 17:49:39.645422   24502 main.go:141] libmachine: (ha-558946)       <model type='virtio'/>
	I0910 17:49:39.645436   24502 main.go:141] libmachine: (ha-558946)     </interface>
	I0910 17:49:39.645447   24502 main.go:141] libmachine: (ha-558946)     <serial type='pty'>
	I0910 17:49:39.645457   24502 main.go:141] libmachine: (ha-558946)       <target port='0'/>
	I0910 17:49:39.645479   24502 main.go:141] libmachine: (ha-558946)     </serial>
	I0910 17:49:39.645496   24502 main.go:141] libmachine: (ha-558946)     <console type='pty'>
	I0910 17:49:39.645506   24502 main.go:141] libmachine: (ha-558946)       <target type='serial' port='0'/>
	I0910 17:49:39.645528   24502 main.go:141] libmachine: (ha-558946)     </console>
	I0910 17:49:39.645543   24502 main.go:141] libmachine: (ha-558946)     <rng model='virtio'>
	I0910 17:49:39.645556   24502 main.go:141] libmachine: (ha-558946)       <backend model='random'>/dev/random</backend>
	I0910 17:49:39.645566   24502 main.go:141] libmachine: (ha-558946)     </rng>
	I0910 17:49:39.645577   24502 main.go:141] libmachine: (ha-558946)     
	I0910 17:49:39.645587   24502 main.go:141] libmachine: (ha-558946)     
	I0910 17:49:39.645599   24502 main.go:141] libmachine: (ha-558946)   </devices>
	I0910 17:49:39.645610   24502 main.go:141] libmachine: (ha-558946) </domain>
	I0910 17:49:39.645622   24502 main.go:141] libmachine: (ha-558946) 
	I0910 17:49:39.649700   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:4b:55:87 in network default
	I0910 17:49:39.650271   24502 main.go:141] libmachine: (ha-558946) Ensuring networks are active...
	I0910 17:49:39.650287   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:39.650919   24502 main.go:141] libmachine: (ha-558946) Ensuring network default is active
	I0910 17:49:39.651172   24502 main.go:141] libmachine: (ha-558946) Ensuring network mk-ha-558946 is active
	I0910 17:49:39.651721   24502 main.go:141] libmachine: (ha-558946) Getting domain xml...
	I0910 17:49:39.652420   24502 main.go:141] libmachine: (ha-558946) Creating domain...
	I0910 17:49:40.822021   24502 main.go:141] libmachine: (ha-558946) Waiting to get IP...
	I0910 17:49:40.822641   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:40.822977   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:40.822997   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:40.822958   24525 retry.go:31] will retry after 296.730328ms: waiting for machine to come up
	I0910 17:49:41.121296   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:41.121685   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:41.121714   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:41.121652   24525 retry.go:31] will retry after 247.649187ms: waiting for machine to come up
	I0910 17:49:41.371076   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:41.371451   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:41.371482   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:41.371402   24525 retry.go:31] will retry after 367.998904ms: waiting for machine to come up
	I0910 17:49:41.740855   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:41.741278   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:41.741305   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:41.741226   24525 retry.go:31] will retry after 448.475273ms: waiting for machine to come up
	I0910 17:49:42.190603   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:42.190989   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:42.191013   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:42.190948   24525 retry.go:31] will retry after 694.285595ms: waiting for machine to come up
	I0910 17:49:42.886793   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:42.887139   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:42.887170   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:42.887112   24525 retry.go:31] will retry after 616.508694ms: waiting for machine to come up
	I0910 17:49:43.504695   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:43.505032   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:43.505058   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:43.504998   24525 retry.go:31] will retry after 1.006459093s: waiting for machine to come up
	I0910 17:49:44.512694   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:44.513136   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:44.513164   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:44.513091   24525 retry.go:31] will retry after 1.034183837s: waiting for machine to come up
	I0910 17:49:45.548509   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:45.548883   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:45.548910   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:45.548832   24525 retry.go:31] will retry after 1.839305323s: waiting for machine to come up
	I0910 17:49:47.390674   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:47.391133   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:47.391157   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:47.391056   24525 retry.go:31] will retry after 1.664309448s: waiting for machine to come up
	I0910 17:49:49.057865   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:49.058330   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:49.058356   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:49.058260   24525 retry.go:31] will retry after 1.942449004s: waiting for machine to come up
	I0910 17:49:51.002278   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:51.002667   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:51.002692   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:51.002634   24525 retry.go:31] will retry after 3.010752626s: waiting for machine to come up
	I0910 17:49:54.014576   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:54.014962   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:54.014991   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:54.014932   24525 retry.go:31] will retry after 3.22703265s: waiting for machine to come up
	I0910 17:49:57.245619   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:49:57.246008   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find current IP address of domain ha-558946 in network mk-ha-558946
	I0910 17:49:57.246033   24502 main.go:141] libmachine: (ha-558946) DBG | I0910 17:49:57.245978   24525 retry.go:31] will retry after 4.311890961s: waiting for machine to come up
	I0910 17:50:01.561029   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.561445   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has current primary IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.561464   24502 main.go:141] libmachine: (ha-558946) Found IP for machine: 192.168.39.109
	I0910 17:50:01.561477   24502 main.go:141] libmachine: (ha-558946) Reserving static IP address...
	I0910 17:50:01.561854   24502 main.go:141] libmachine: (ha-558946) DBG | unable to find host DHCP lease matching {name: "ha-558946", mac: "52:54:00:19:8f:4f", ip: "192.168.39.109"} in network mk-ha-558946
	I0910 17:50:01.629833   24502 main.go:141] libmachine: (ha-558946) DBG | Getting to WaitForSSH function...
	I0910 17:50:01.629864   24502 main.go:141] libmachine: (ha-558946) Reserved static IP address: 192.168.39.109
	I0910 17:50:01.629879   24502 main.go:141] libmachine: (ha-558946) Waiting for SSH to be available...
	I0910 17:50:01.632245   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.632658   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:minikube Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:01.632684   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.632836   24502 main.go:141] libmachine: (ha-558946) DBG | Using SSH client type: external
	I0910 17:50:01.632862   24502 main.go:141] libmachine: (ha-558946) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa (-rw-------)
	I0910 17:50:01.632904   24502 main.go:141] libmachine: (ha-558946) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 17:50:01.632919   24502 main.go:141] libmachine: (ha-558946) DBG | About to run SSH command:
	I0910 17:50:01.632947   24502 main.go:141] libmachine: (ha-558946) DBG | exit 0
	I0910 17:50:01.757218   24502 main.go:141] libmachine: (ha-558946) DBG | SSH cmd err, output: <nil>: 
	I0910 17:50:01.757663   24502 main.go:141] libmachine: (ha-558946) KVM machine creation complete!
	I0910 17:50:01.758039   24502 main.go:141] libmachine: (ha-558946) Calling .GetConfigRaw
	I0910 17:50:01.758694   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:01.758891   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:01.759070   24502 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0910 17:50:01.759103   24502 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:50:01.760480   24502 main.go:141] libmachine: Detecting operating system of created instance...
	I0910 17:50:01.760495   24502 main.go:141] libmachine: Waiting for SSH to be available...
	I0910 17:50:01.760500   24502 main.go:141] libmachine: Getting to WaitForSSH function...
	I0910 17:50:01.760505   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:01.762521   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.762818   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:01.762840   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.762969   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:01.763129   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:01.763273   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:01.763389   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:01.763558   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:01.763736   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:50:01.763747   24502 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0910 17:50:01.868652   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:50:01.868672   24502 main.go:141] libmachine: Detecting the provisioner...
	I0910 17:50:01.868679   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:01.871336   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.871635   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:01.871671   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.871823   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:01.872030   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:01.872173   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:01.872333   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:01.872499   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:01.872667   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:50:01.872681   24502 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0910 17:50:01.977579   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0910 17:50:01.977675   24502 main.go:141] libmachine: found compatible host: buildroot
	I0910 17:50:01.977690   24502 main.go:141] libmachine: Provisioning with buildroot...
	I0910 17:50:01.977703   24502 main.go:141] libmachine: (ha-558946) Calling .GetMachineName
	I0910 17:50:01.977941   24502 buildroot.go:166] provisioning hostname "ha-558946"
	I0910 17:50:01.977962   24502 main.go:141] libmachine: (ha-558946) Calling .GetMachineName
	I0910 17:50:01.978147   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:01.980520   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.980849   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:01.980867   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:01.981010   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:01.981243   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:01.981430   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:01.981565   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:01.981722   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:01.981898   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:50:01.981913   24502 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-558946 && echo "ha-558946" | sudo tee /etc/hostname
	I0910 17:50:02.099018   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-558946
	
	I0910 17:50:02.099048   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:02.101744   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.102095   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:02.102122   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.102297   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:02.102444   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:02.102584   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:02.102706   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:02.102827   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:02.103035   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:50:02.103053   24502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-558946' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-558946/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-558946' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 17:50:02.213905   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:50:02.213934   24502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 17:50:02.213973   24502 buildroot.go:174] setting up certificates
	I0910 17:50:02.213982   24502 provision.go:84] configureAuth start
	I0910 17:50:02.213991   24502 main.go:141] libmachine: (ha-558946) Calling .GetMachineName
	I0910 17:50:02.214288   24502 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 17:50:02.216720   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.217142   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:02.217171   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.217361   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:02.219240   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.219515   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:02.219549   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.219625   24502 provision.go:143] copyHostCerts
	I0910 17:50:02.219663   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 17:50:02.219722   24502 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 17:50:02.219733   24502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 17:50:02.219819   24502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 17:50:02.219925   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 17:50:02.219945   24502 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 17:50:02.219952   24502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 17:50:02.219977   24502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 17:50:02.220032   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 17:50:02.220047   24502 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 17:50:02.220053   24502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 17:50:02.220075   24502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 17:50:02.220131   24502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.ha-558946 san=[127.0.0.1 192.168.39.109 ha-558946 localhost minikube]
	I0910 17:50:02.548645   24502 provision.go:177] copyRemoteCerts
	I0910 17:50:02.548693   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 17:50:02.548713   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:02.551327   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.551634   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:02.551653   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.551829   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:02.552021   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:02.552155   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:02.552283   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:50:02.634777   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0910 17:50:02.634840   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0910 17:50:02.659335   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0910 17:50:02.659396   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 17:50:02.682832   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0910 17:50:02.682905   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 17:50:02.705354   24502 provision.go:87] duration metric: took 491.359768ms to configureAuth
	I0910 17:50:02.705380   24502 buildroot.go:189] setting minikube options for container-runtime
	I0910 17:50:02.705582   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:50:02.705664   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:02.707934   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.708274   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:02.708300   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.708465   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:02.708655   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:02.708815   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:02.708931   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:02.709106   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:02.709393   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:50:02.709417   24502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 17:50:02.924108   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 17:50:02.924134   24502 main.go:141] libmachine: Checking connection to Docker...
	I0910 17:50:02.924145   24502 main.go:141] libmachine: (ha-558946) Calling .GetURL
	I0910 17:50:02.925196   24502 main.go:141] libmachine: (ha-558946) DBG | Using libvirt version 6000000
	I0910 17:50:02.927214   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.927556   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:02.927582   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.927763   24502 main.go:141] libmachine: Docker is up and running!
	I0910 17:50:02.927776   24502 main.go:141] libmachine: Reticulating splines...
	I0910 17:50:02.927783   24502 client.go:171] duration metric: took 23.758306556s to LocalClient.Create
	I0910 17:50:02.927804   24502 start.go:167] duration metric: took 23.758360536s to libmachine.API.Create "ha-558946"
	I0910 17:50:02.927815   24502 start.go:293] postStartSetup for "ha-558946" (driver="kvm2")
	I0910 17:50:02.927827   24502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 17:50:02.927847   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:02.928053   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 17:50:02.928072   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:02.929894   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.930215   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:02.930244   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:02.930336   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:02.930498   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:02.930642   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:02.930800   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:50:03.011064   24502 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 17:50:03.015180   24502 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 17:50:03.015199   24502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 17:50:03.015261   24502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 17:50:03.015339   24502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 17:50:03.015350   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /etc/ssl/certs/131212.pem
	I0910 17:50:03.015435   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 17:50:03.024242   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 17:50:03.047404   24502 start.go:296] duration metric: took 119.576444ms for postStartSetup
	I0910 17:50:03.047451   24502 main.go:141] libmachine: (ha-558946) Calling .GetConfigRaw
	I0910 17:50:03.048018   24502 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 17:50:03.050509   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.050869   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:03.050888   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.051134   24502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:50:03.051298   24502 start.go:128] duration metric: took 23.899351421s to createHost
	I0910 17:50:03.051317   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:03.053313   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.053576   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:03.053601   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.053715   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:03.053871   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:03.054002   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:03.054092   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:03.054225   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:03.054386   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:50:03.054399   24502 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 17:50:03.157649   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725990603.133866575
	
	I0910 17:50:03.157667   24502 fix.go:216] guest clock: 1725990603.133866575
	I0910 17:50:03.157674   24502 fix.go:229] Guest: 2024-09-10 17:50:03.133866575 +0000 UTC Remote: 2024-09-10 17:50:03.051308157 +0000 UTC m=+23.997137359 (delta=82.558418ms)
	I0910 17:50:03.157703   24502 fix.go:200] guest clock delta is within tolerance: 82.558418ms
	I0910 17:50:03.157710   24502 start.go:83] releasing machines lock for "ha-558946", held for 24.005824756s
	I0910 17:50:03.157744   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:03.157996   24502 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 17:50:03.160405   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.160705   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:03.160733   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.160895   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:03.161301   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:03.161469   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:03.161517   24502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 17:50:03.161570   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:03.161651   24502 ssh_runner.go:195] Run: cat /version.json
	I0910 17:50:03.161672   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:03.163837   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.164105   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:03.164124   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.164143   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.164319   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:03.164480   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:03.164618   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:03.164630   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:03.164638   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:03.164774   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:03.164825   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:50:03.165158   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:03.165330   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:03.165490   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:50:03.262615   24502 ssh_runner.go:195] Run: systemctl --version
	I0910 17:50:03.268168   24502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 17:50:03.424149   24502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 17:50:03.431627   24502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 17:50:03.431728   24502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 17:50:03.447902   24502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 17:50:03.447920   24502 start.go:495] detecting cgroup driver to use...
	I0910 17:50:03.447970   24502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 17:50:03.464681   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 17:50:03.478344   24502 docker.go:217] disabling cri-docker service (if available) ...
	I0910 17:50:03.478393   24502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 17:50:03.491617   24502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 17:50:03.504948   24502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 17:50:03.623678   24502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 17:50:03.777986   24502 docker.go:233] disabling docker service ...
	I0910 17:50:03.778053   24502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 17:50:03.795678   24502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 17:50:03.807738   24502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 17:50:03.927114   24502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 17:50:04.046700   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 17:50:04.061573   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 17:50:04.079740   24502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 17:50:04.079800   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:04.089945   24502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 17:50:04.090001   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:04.100275   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:04.110278   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:04.120193   24502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 17:50:04.130323   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:04.140410   24502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:04.156505   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:04.166564   24502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 17:50:04.175577   24502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 17:50:04.175615   24502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 17:50:04.187687   24502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 17:50:04.197125   24502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:50:04.314220   24502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 17:50:04.403163   24502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 17:50:04.403227   24502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 17:50:04.407880   24502 start.go:563] Will wait 60s for crictl version
	I0910 17:50:04.407927   24502 ssh_runner.go:195] Run: which crictl
	I0910 17:50:04.411519   24502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 17:50:04.448166   24502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 17:50:04.448229   24502 ssh_runner.go:195] Run: crio --version
	I0910 17:50:04.475650   24502 ssh_runner.go:195] Run: crio --version
	I0910 17:50:04.505995   24502 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 17:50:04.507159   24502 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 17:50:04.509693   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:04.510041   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:04.510064   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:04.510257   24502 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 17:50:04.514205   24502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:50:04.526831   24502 kubeadm.go:883] updating cluster {Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 17:50:04.526952   24502 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:50:04.527013   24502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 17:50:04.561988   24502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 17:50:04.562047   24502 ssh_runner.go:195] Run: which lz4
	I0910 17:50:04.565573   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0910 17:50:04.565650   24502 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 17:50:04.569559   24502 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 17:50:04.569581   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 17:50:05.859847   24502 crio.go:462] duration metric: took 1.294220445s to copy over tarball
	I0910 17:50:05.859916   24502 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 17:50:07.877493   24502 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.017547737s)
	I0910 17:50:07.877524   24502 crio.go:469] duration metric: took 2.017650904s to extract the tarball
	I0910 17:50:07.877533   24502 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 17:50:07.914725   24502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 17:50:07.958892   24502 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 17:50:07.958913   24502 cache_images.go:84] Images are preloaded, skipping loading
	I0910 17:50:07.958920   24502 kubeadm.go:934] updating node { 192.168.39.109 8443 v1.31.0 crio true true} ...
	I0910 17:50:07.959026   24502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-558946 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 17:50:07.959104   24502 ssh_runner.go:195] Run: crio config
	I0910 17:50:08.002476   24502 cni.go:84] Creating CNI manager for ""
	I0910 17:50:08.002493   24502 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0910 17:50:08.002503   24502 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 17:50:08.002528   24502 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.109 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-558946 NodeName:ha-558946 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 17:50:08.002673   24502 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-558946"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 17:50:08.002696   24502 kube-vip.go:115] generating kube-vip config ...
	I0910 17:50:08.002750   24502 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0910 17:50:08.019635   24502 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0910 17:50:08.019728   24502 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0910 17:50:08.019787   24502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 17:50:08.030022   24502 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 17:50:08.030085   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0910 17:50:08.039779   24502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0910 17:50:08.056653   24502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 17:50:08.072802   24502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0910 17:50:08.088307   24502 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0910 17:50:08.103758   24502 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0910 17:50:08.107195   24502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:50:08.118914   24502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:50:08.241425   24502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:50:08.259439   24502 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946 for IP: 192.168.39.109
	I0910 17:50:08.259476   24502 certs.go:194] generating shared ca certs ...
	I0910 17:50:08.259495   24502 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:08.259673   24502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 17:50:08.259726   24502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 17:50:08.259740   24502 certs.go:256] generating profile certs ...
	I0910 17:50:08.259806   24502 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.key
	I0910 17:50:08.259830   24502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.crt with IP's: []
	I0910 17:50:08.416618   24502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.crt ...
	I0910 17:50:08.416641   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.crt: {Name:mk02a24e9066514871a2e5b41e9bcd6c7425a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:08.416791   24502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.key ...
	I0910 17:50:08.416801   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.key: {Name:mk0aa9a9e3d6cec45852bec5c42bc0b52d7701b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:08.416878   24502 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.dea87bef
	I0910 17:50:08.416893   24502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.dea87bef with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.109 192.168.39.254]
	I0910 17:50:08.652698   24502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.dea87bef ...
	I0910 17:50:08.652724   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.dea87bef: {Name:mk5e0b96cb3e4be0397b134fb9c806462cb4f639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:08.652873   24502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.dea87bef ...
	I0910 17:50:08.652885   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.dea87bef: {Name:mkadf564b2290466f24114dda6ad78ad96425087 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:08.652961   24502 certs.go:381] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.dea87bef -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt
	I0910 17:50:08.653045   24502 certs.go:385] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.dea87bef -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key
	I0910 17:50:08.653135   24502 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key
	I0910 17:50:08.653155   24502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt with IP's: []
	I0910 17:50:08.891264   24502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt ...
	I0910 17:50:08.891293   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt: {Name:mk8c6979845b5ba1e31bbcdbd008b433a414d8bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:08.891475   24502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key ...
	I0910 17:50:08.891492   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key: {Name:mk2be15a3801bc87359871b239ea8db29babef34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:08.891583   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 17:50:08.891605   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0910 17:50:08.891623   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 17:50:08.891641   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 17:50:08.891658   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 17:50:08.891674   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 17:50:08.891686   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 17:50:08.891704   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 17:50:08.891763   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 17:50:08.891806   24502 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 17:50:08.891820   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 17:50:08.891854   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 17:50:08.891886   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 17:50:08.891915   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 17:50:08.891968   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 17:50:08.892004   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem -> /usr/share/ca-certificates/13121.pem
	I0910 17:50:08.892023   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /usr/share/ca-certificates/131212.pem
	I0910 17:50:08.892041   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:50:08.892574   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 17:50:08.920188   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 17:50:08.950243   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 17:50:08.980967   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 17:50:09.018266   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0910 17:50:09.053699   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 17:50:09.078323   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 17:50:09.102014   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 17:50:09.126110   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 17:50:09.148643   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 17:50:09.171899   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 17:50:09.195493   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 17:50:09.211404   24502 ssh_runner.go:195] Run: openssl version
	I0910 17:50:09.217163   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 17:50:09.227301   24502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:50:09.231678   24502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:50:09.231725   24502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:50:09.237377   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 17:50:09.247109   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 17:50:09.257172   24502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 17:50:09.261509   24502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 17:50:09.261545   24502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 17:50:09.266952   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 17:50:09.276846   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 17:50:09.286657   24502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 17:50:09.290950   24502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 17:50:09.290991   24502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 17:50:09.296343   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 17:50:09.306132   24502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 17:50:09.310011   24502 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 17:50:09.310061   24502 kubeadm.go:392] StartCluster: {Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:50:09.310128   24502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 17:50:09.310169   24502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 17:50:09.344121   24502 cri.go:89] found id: ""
	I0910 17:50:09.344178   24502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 17:50:09.353363   24502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 17:50:09.362509   24502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 17:50:09.371504   24502 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 17:50:09.371521   24502 kubeadm.go:157] found existing configuration files:
	
	I0910 17:50:09.371562   24502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 17:50:09.380093   24502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 17:50:09.380144   24502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 17:50:09.389132   24502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 17:50:09.397443   24502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 17:50:09.397488   24502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 17:50:09.406163   24502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 17:50:09.414438   24502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 17:50:09.414483   24502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 17:50:09.423200   24502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 17:50:09.431469   24502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 17:50:09.431513   24502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 17:50:09.440172   24502 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 17:50:09.554959   24502 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 17:50:09.555099   24502 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 17:50:09.666189   24502 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 17:50:09.666283   24502 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 17:50:09.666367   24502 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 17:50:09.678602   24502 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 17:50:09.709667   24502 out.go:235]   - Generating certificates and keys ...
	I0910 17:50:09.709792   24502 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 17:50:09.709873   24502 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 17:50:09.844596   24502 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 17:50:10.088833   24502 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0910 17:50:10.178873   24502 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0910 17:50:10.264095   24502 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0910 17:50:10.651300   24502 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0910 17:50:10.651439   24502 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-558946 localhost] and IPs [192.168.39.109 127.0.0.1 ::1]
	I0910 17:50:10.731932   24502 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0910 17:50:10.732081   24502 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-558946 localhost] and IPs [192.168.39.109 127.0.0.1 ::1]
	I0910 17:50:11.144773   24502 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 17:50:11.316362   24502 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 17:50:11.492676   24502 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0910 17:50:11.492747   24502 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 17:50:11.653203   24502 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 17:50:11.907502   24502 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 17:50:12.136495   24502 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 17:50:12.348260   24502 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 17:50:12.558229   24502 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 17:50:12.558766   24502 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 17:50:12.563826   24502 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 17:50:12.565756   24502 out.go:235]   - Booting up control plane ...
	I0910 17:50:12.565856   24502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 17:50:12.565965   24502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 17:50:12.566063   24502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 17:50:12.582150   24502 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 17:50:12.590956   24502 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 17:50:12.591011   24502 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 17:50:12.740364   24502 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 17:50:12.740512   24502 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 17:50:13.740525   24502 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000862182s
	I0910 17:50:13.740620   24502 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 17:50:19.562935   24502 kubeadm.go:310] [api-check] The API server is healthy after 5.825318755s
	I0910 17:50:19.578088   24502 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 17:50:19.596127   24502 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 17:50:19.634765   24502 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 17:50:19.634949   24502 kubeadm.go:310] [mark-control-plane] Marking the node ha-558946 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 17:50:19.646641   24502 kubeadm.go:310] [bootstrap-token] Using token: 6pfcgw.55ya2kbllqozh475
	I0910 17:50:19.648086   24502 out.go:235]   - Configuring RBAC rules ...
	I0910 17:50:19.648186   24502 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 17:50:19.663616   24502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 17:50:19.673774   24502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 17:50:19.677251   24502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 17:50:19.681178   24502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 17:50:19.685377   24502 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 17:50:19.969343   24502 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 17:50:20.404598   24502 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 17:50:20.970536   24502 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 17:50:20.971538   24502 kubeadm.go:310] 
	I0910 17:50:20.971612   24502 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 17:50:20.971623   24502 kubeadm.go:310] 
	I0910 17:50:20.971713   24502 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 17:50:20.971730   24502 kubeadm.go:310] 
	I0910 17:50:20.971761   24502 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 17:50:20.971815   24502 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 17:50:20.971880   24502 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 17:50:20.971896   24502 kubeadm.go:310] 
	I0910 17:50:20.971965   24502 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 17:50:20.971976   24502 kubeadm.go:310] 
	I0910 17:50:20.972044   24502 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 17:50:20.972053   24502 kubeadm.go:310] 
	I0910 17:50:20.972122   24502 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 17:50:20.972222   24502 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 17:50:20.972320   24502 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 17:50:20.972329   24502 kubeadm.go:310] 
	I0910 17:50:20.972433   24502 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 17:50:20.972538   24502 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 17:50:20.972547   24502 kubeadm.go:310] 
	I0910 17:50:20.972654   24502 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6pfcgw.55ya2kbllqozh475 \
	I0910 17:50:20.972761   24502 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 \
	I0910 17:50:20.972798   24502 kubeadm.go:310] 	--control-plane 
	I0910 17:50:20.972808   24502 kubeadm.go:310] 
	I0910 17:50:20.972898   24502 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 17:50:20.972906   24502 kubeadm.go:310] 
	I0910 17:50:20.972973   24502 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6pfcgw.55ya2kbllqozh475 \
	I0910 17:50:20.973100   24502 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 
	I0910 17:50:20.974323   24502 kubeadm.go:310] W0910 17:50:09.535495     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 17:50:20.974612   24502 kubeadm.go:310] W0910 17:50:09.536421     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 17:50:20.974733   24502 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 17:50:20.974759   24502 cni.go:84] Creating CNI manager for ""
	I0910 17:50:20.974771   24502 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0910 17:50:20.976400   24502 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0910 17:50:20.977678   24502 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0910 17:50:20.983151   24502 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0910 17:50:20.983170   24502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0910 17:50:21.001643   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0910 17:50:21.432325   24502 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 17:50:21.432378   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:50:21.432448   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-558946 minikube.k8s.io/updated_at=2024_09_10T17_50_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=ha-558946 minikube.k8s.io/primary=true
	I0910 17:50:21.597339   24502 ops.go:34] apiserver oom_adj: -16
	I0910 17:50:21.633387   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:50:22.134183   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:50:22.634091   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:50:23.133856   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:50:23.634143   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:50:24.134355   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:50:24.633929   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:50:25.134000   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:50:25.286471   24502 kubeadm.go:1113] duration metric: took 3.854135157s to wait for elevateKubeSystemPrivileges
	I0910 17:50:25.286512   24502 kubeadm.go:394] duration metric: took 15.976455198s to StartCluster
	I0910 17:50:25.286533   24502 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:25.286621   24502 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:50:25.287196   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:25.287395   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0910 17:50:25.287394   24502 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:50:25.287416   24502 start.go:241] waiting for startup goroutines ...
	I0910 17:50:25.287432   24502 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 17:50:25.287506   24502 addons.go:69] Setting storage-provisioner=true in profile "ha-558946"
	I0910 17:50:25.287513   24502 addons.go:69] Setting default-storageclass=true in profile "ha-558946"
	I0910 17:50:25.287535   24502 addons.go:234] Setting addon storage-provisioner=true in "ha-558946"
	I0910 17:50:25.287539   24502 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-558946"
	I0910 17:50:25.287564   24502 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:50:25.287609   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:50:25.287933   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:50:25.287950   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:50:25.287966   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:50:25.287983   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:50:25.302239   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44067
	I0910 17:50:25.302511   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37141
	I0910 17:50:25.302794   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:50:25.302983   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:50:25.303343   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:50:25.303366   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:50:25.303566   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:50:25.303598   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:50:25.303668   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:50:25.303841   24502 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:50:25.303925   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:50:25.304542   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:50:25.304597   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:50:25.306071   24502 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:50:25.306415   24502 kapi.go:59] client config for ha-558946: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.crt", KeyFile:"/home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.key", CAFile:"/home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f2c360), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 17:50:25.306936   24502 cert_rotation.go:140] Starting client certificate rotation controller
	I0910 17:50:25.307202   24502 addons.go:234] Setting addon default-storageclass=true in "ha-558946"
	I0910 17:50:25.307244   24502 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:50:25.307616   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:50:25.307662   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:50:25.320561   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39083
	I0910 17:50:25.321101   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:50:25.321629   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:50:25.321647   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:50:25.321972   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:50:25.322154   24502 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:50:25.322221   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43181
	I0910 17:50:25.322511   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:50:25.322888   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:50:25.322905   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:50:25.323233   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:50:25.323799   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:50:25.323836   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:25.323843   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:50:25.325907   24502 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 17:50:25.327204   24502 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:50:25.327220   24502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 17:50:25.327237   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:25.330683   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:25.331140   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:25.331167   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:25.331322   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:25.331543   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:25.331715   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:25.331871   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:50:25.339453   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42849
	I0910 17:50:25.339845   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:50:25.340291   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:50:25.340316   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:50:25.340649   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:50:25.340847   24502 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:50:25.342323   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:25.342514   24502 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 17:50:25.342531   24502 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 17:50:25.342547   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:25.345740   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:25.346233   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:25.346258   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:25.346409   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:25.346576   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:25.346739   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:25.346861   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:50:25.465300   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0910 17:50:25.495431   24502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 17:50:25.527897   24502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:50:26.004104   24502 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0910 17:50:26.004176   24502 main.go:141] libmachine: Making call to close driver server
	I0910 17:50:26.004194   24502 main.go:141] libmachine: (ha-558946) Calling .Close
	I0910 17:50:26.004478   24502 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:50:26.004495   24502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:50:26.004510   24502 main.go:141] libmachine: Making call to close driver server
	I0910 17:50:26.004519   24502 main.go:141] libmachine: (ha-558946) Calling .Close
	I0910 17:50:26.004849   24502 main.go:141] libmachine: (ha-558946) DBG | Closing plugin on server side
	I0910 17:50:26.004863   24502 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:50:26.004877   24502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:50:26.004938   24502 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0910 17:50:26.004955   24502 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0910 17:50:26.005044   24502 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0910 17:50:26.005054   24502 round_trippers.go:469] Request Headers:
	I0910 17:50:26.005064   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:50:26.005091   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:50:26.013193   24502 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 17:50:26.013699   24502 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0910 17:50:26.013712   24502 round_trippers.go:469] Request Headers:
	I0910 17:50:26.013722   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:50:26.013728   24502 round_trippers.go:473]     Content-Type: application/json
	I0910 17:50:26.013732   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:50:26.018699   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:50:26.018832   24502 main.go:141] libmachine: Making call to close driver server
	I0910 17:50:26.018849   24502 main.go:141] libmachine: (ha-558946) Calling .Close
	I0910 17:50:26.019079   24502 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:50:26.019100   24502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:50:26.019107   24502 main.go:141] libmachine: (ha-558946) DBG | Closing plugin on server side
	I0910 17:50:26.263137   24502 main.go:141] libmachine: Making call to close driver server
	I0910 17:50:26.263162   24502 main.go:141] libmachine: (ha-558946) Calling .Close
	I0910 17:50:26.263455   24502 main.go:141] libmachine: (ha-558946) DBG | Closing plugin on server side
	I0910 17:50:26.263491   24502 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:50:26.263500   24502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:50:26.263510   24502 main.go:141] libmachine: Making call to close driver server
	I0910 17:50:26.263520   24502 main.go:141] libmachine: (ha-558946) Calling .Close
	I0910 17:50:26.263883   24502 main.go:141] libmachine: (ha-558946) DBG | Closing plugin on server side
	I0910 17:50:26.263917   24502 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:50:26.263929   24502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:50:26.265477   24502 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0910 17:50:26.266747   24502 addons.go:510] duration metric: took 979.320996ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0910 17:50:26.266777   24502 start.go:246] waiting for cluster config update ...
	I0910 17:50:26.266788   24502 start.go:255] writing updated cluster config ...
	I0910 17:50:26.268434   24502 out.go:201] 
	I0910 17:50:26.269831   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:50:26.269896   24502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:50:26.271493   24502 out.go:177] * Starting "ha-558946-m02" control-plane node in "ha-558946" cluster
	I0910 17:50:26.273011   24502 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:50:26.273029   24502 cache.go:56] Caching tarball of preloaded images
	I0910 17:50:26.273114   24502 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 17:50:26.273127   24502 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 17:50:26.273183   24502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:50:26.273554   24502 start.go:360] acquireMachinesLock for ha-558946-m02: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 17:50:26.273591   24502 start.go:364] duration metric: took 20.548µs to acquireMachinesLock for "ha-558946-m02"
	I0910 17:50:26.273604   24502 start.go:93] Provisioning new machine with config: &{Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:50:26.273665   24502 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0910 17:50:26.275158   24502 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 17:50:26.275224   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:50:26.275244   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:50:26.289864   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38285
	I0910 17:50:26.290242   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:50:26.290706   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:50:26.290723   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:50:26.291024   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:50:26.291213   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetMachineName
	I0910 17:50:26.291362   24502 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:50:26.291524   24502 start.go:159] libmachine.API.Create for "ha-558946" (driver="kvm2")
	I0910 17:50:26.291547   24502 client.go:168] LocalClient.Create starting
	I0910 17:50:26.291578   24502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem
	I0910 17:50:26.291616   24502 main.go:141] libmachine: Decoding PEM data...
	I0910 17:50:26.291636   24502 main.go:141] libmachine: Parsing certificate...
	I0910 17:50:26.291701   24502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem
	I0910 17:50:26.291727   24502 main.go:141] libmachine: Decoding PEM data...
	I0910 17:50:26.291743   24502 main.go:141] libmachine: Parsing certificate...
	I0910 17:50:26.291766   24502 main.go:141] libmachine: Running pre-create checks...
	I0910 17:50:26.291785   24502 main.go:141] libmachine: (ha-558946-m02) Calling .PreCreateCheck
	I0910 17:50:26.291927   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetConfigRaw
	I0910 17:50:26.292349   24502 main.go:141] libmachine: Creating machine...
	I0910 17:50:26.292366   24502 main.go:141] libmachine: (ha-558946-m02) Calling .Create
	I0910 17:50:26.292491   24502 main.go:141] libmachine: (ha-558946-m02) Creating KVM machine...
	I0910 17:50:26.293620   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found existing default KVM network
	I0910 17:50:26.293738   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found existing private KVM network mk-ha-558946
	I0910 17:50:26.293883   24502 main.go:141] libmachine: (ha-558946-m02) Setting up store path in /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02 ...
	I0910 17:50:26.293908   24502 main.go:141] libmachine: (ha-558946-m02) Building disk image from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 17:50:26.293943   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:26.293859   24863 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:50:26.294030   24502 main.go:141] libmachine: (ha-558946-m02) Downloading /home/jenkins/minikube-integration/19598-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 17:50:26.519575   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:26.519434   24863 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa...
	I0910 17:50:26.605750   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:26.605615   24863 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/ha-558946-m02.rawdisk...
	I0910 17:50:26.605789   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Writing magic tar header
	I0910 17:50:26.605804   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Writing SSH key tar header
	I0910 17:50:26.605818   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:26.605761   24863 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02 ...
	I0910 17:50:26.605929   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02
	I0910 17:50:26.605948   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines
	I0910 17:50:26.605981   24502 main.go:141] libmachine: (ha-558946-m02) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02 (perms=drwx------)
	I0910 17:50:26.606012   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:50:26.606027   24502 main.go:141] libmachine: (ha-558946-m02) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines (perms=drwxr-xr-x)
	I0910 17:50:26.606040   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973
	I0910 17:50:26.606051   24502 main.go:141] libmachine: (ha-558946-m02) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube (perms=drwxr-xr-x)
	I0910 17:50:26.606062   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0910 17:50:26.606073   24502 main.go:141] libmachine: (ha-558946-m02) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973 (perms=drwxrwxr-x)
	I0910 17:50:26.606091   24502 main.go:141] libmachine: (ha-558946-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0910 17:50:26.606103   24502 main.go:141] libmachine: (ha-558946-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0910 17:50:26.606118   24502 main.go:141] libmachine: (ha-558946-m02) Creating domain...
	I0910 17:50:26.606130   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Checking permissions on dir: /home/jenkins
	I0910 17:50:26.606157   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Checking permissions on dir: /home
	I0910 17:50:26.606179   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Skipping /home - not owner
	I0910 17:50:26.606936   24502 main.go:141] libmachine: (ha-558946-m02) define libvirt domain using xml: 
	I0910 17:50:26.606957   24502 main.go:141] libmachine: (ha-558946-m02) <domain type='kvm'>
	I0910 17:50:26.606966   24502 main.go:141] libmachine: (ha-558946-m02)   <name>ha-558946-m02</name>
	I0910 17:50:26.606977   24502 main.go:141] libmachine: (ha-558946-m02)   <memory unit='MiB'>2200</memory>
	I0910 17:50:26.606988   24502 main.go:141] libmachine: (ha-558946-m02)   <vcpu>2</vcpu>
	I0910 17:50:26.606997   24502 main.go:141] libmachine: (ha-558946-m02)   <features>
	I0910 17:50:26.607005   24502 main.go:141] libmachine: (ha-558946-m02)     <acpi/>
	I0910 17:50:26.607014   24502 main.go:141] libmachine: (ha-558946-m02)     <apic/>
	I0910 17:50:26.607024   24502 main.go:141] libmachine: (ha-558946-m02)     <pae/>
	I0910 17:50:26.607033   24502 main.go:141] libmachine: (ha-558946-m02)     
	I0910 17:50:26.607041   24502 main.go:141] libmachine: (ha-558946-m02)   </features>
	I0910 17:50:26.607051   24502 main.go:141] libmachine: (ha-558946-m02)   <cpu mode='host-passthrough'>
	I0910 17:50:26.607070   24502 main.go:141] libmachine: (ha-558946-m02)   
	I0910 17:50:26.607090   24502 main.go:141] libmachine: (ha-558946-m02)   </cpu>
	I0910 17:50:26.607105   24502 main.go:141] libmachine: (ha-558946-m02)   <os>
	I0910 17:50:26.607113   24502 main.go:141] libmachine: (ha-558946-m02)     <type>hvm</type>
	I0910 17:50:26.607121   24502 main.go:141] libmachine: (ha-558946-m02)     <boot dev='cdrom'/>
	I0910 17:50:26.607127   24502 main.go:141] libmachine: (ha-558946-m02)     <boot dev='hd'/>
	I0910 17:50:26.607134   24502 main.go:141] libmachine: (ha-558946-m02)     <bootmenu enable='no'/>
	I0910 17:50:26.607138   24502 main.go:141] libmachine: (ha-558946-m02)   </os>
	I0910 17:50:26.607144   24502 main.go:141] libmachine: (ha-558946-m02)   <devices>
	I0910 17:50:26.607152   24502 main.go:141] libmachine: (ha-558946-m02)     <disk type='file' device='cdrom'>
	I0910 17:50:26.607160   24502 main.go:141] libmachine: (ha-558946-m02)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/boot2docker.iso'/>
	I0910 17:50:26.607172   24502 main.go:141] libmachine: (ha-558946-m02)       <target dev='hdc' bus='scsi'/>
	I0910 17:50:26.607182   24502 main.go:141] libmachine: (ha-558946-m02)       <readonly/>
	I0910 17:50:26.607192   24502 main.go:141] libmachine: (ha-558946-m02)     </disk>
	I0910 17:50:26.607204   24502 main.go:141] libmachine: (ha-558946-m02)     <disk type='file' device='disk'>
	I0910 17:50:26.607216   24502 main.go:141] libmachine: (ha-558946-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0910 17:50:26.607232   24502 main.go:141] libmachine: (ha-558946-m02)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/ha-558946-m02.rawdisk'/>
	I0910 17:50:26.607241   24502 main.go:141] libmachine: (ha-558946-m02)       <target dev='hda' bus='virtio'/>
	I0910 17:50:26.607273   24502 main.go:141] libmachine: (ha-558946-m02)     </disk>
	I0910 17:50:26.607294   24502 main.go:141] libmachine: (ha-558946-m02)     <interface type='network'>
	I0910 17:50:26.607309   24502 main.go:141] libmachine: (ha-558946-m02)       <source network='mk-ha-558946'/>
	I0910 17:50:26.607320   24502 main.go:141] libmachine: (ha-558946-m02)       <model type='virtio'/>
	I0910 17:50:26.607329   24502 main.go:141] libmachine: (ha-558946-m02)     </interface>
	I0910 17:50:26.607341   24502 main.go:141] libmachine: (ha-558946-m02)     <interface type='network'>
	I0910 17:50:26.607355   24502 main.go:141] libmachine: (ha-558946-m02)       <source network='default'/>
	I0910 17:50:26.607369   24502 main.go:141] libmachine: (ha-558946-m02)       <model type='virtio'/>
	I0910 17:50:26.607383   24502 main.go:141] libmachine: (ha-558946-m02)     </interface>
	I0910 17:50:26.607394   24502 main.go:141] libmachine: (ha-558946-m02)     <serial type='pty'>
	I0910 17:50:26.607406   24502 main.go:141] libmachine: (ha-558946-m02)       <target port='0'/>
	I0910 17:50:26.607414   24502 main.go:141] libmachine: (ha-558946-m02)     </serial>
	I0910 17:50:26.607424   24502 main.go:141] libmachine: (ha-558946-m02)     <console type='pty'>
	I0910 17:50:26.607431   24502 main.go:141] libmachine: (ha-558946-m02)       <target type='serial' port='0'/>
	I0910 17:50:26.607447   24502 main.go:141] libmachine: (ha-558946-m02)     </console>
	I0910 17:50:26.607462   24502 main.go:141] libmachine: (ha-558946-m02)     <rng model='virtio'>
	I0910 17:50:26.607473   24502 main.go:141] libmachine: (ha-558946-m02)       <backend model='random'>/dev/random</backend>
	I0910 17:50:26.607480   24502 main.go:141] libmachine: (ha-558946-m02)     </rng>
	I0910 17:50:26.607491   24502 main.go:141] libmachine: (ha-558946-m02)     
	I0910 17:50:26.607501   24502 main.go:141] libmachine: (ha-558946-m02)     
	I0910 17:50:26.607510   24502 main.go:141] libmachine: (ha-558946-m02)   </devices>
	I0910 17:50:26.607519   24502 main.go:141] libmachine: (ha-558946-m02) </domain>
	I0910 17:50:26.607529   24502 main.go:141] libmachine: (ha-558946-m02) 
	I0910 17:50:26.613978   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:54:64:6d in network default
	I0910 17:50:26.614547   24502 main.go:141] libmachine: (ha-558946-m02) Ensuring networks are active...
	I0910 17:50:26.614567   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:26.615166   24502 main.go:141] libmachine: (ha-558946-m02) Ensuring network default is active
	I0910 17:50:26.615504   24502 main.go:141] libmachine: (ha-558946-m02) Ensuring network mk-ha-558946 is active
	I0910 17:50:26.615852   24502 main.go:141] libmachine: (ha-558946-m02) Getting domain xml...
	I0910 17:50:26.616554   24502 main.go:141] libmachine: (ha-558946-m02) Creating domain...
	I0910 17:50:27.911789   24502 main.go:141] libmachine: (ha-558946-m02) Waiting to get IP...
	I0910 17:50:27.912693   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:27.913100   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:27.913133   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:27.913057   24863 retry.go:31] will retry after 265.359054ms: waiting for machine to come up
	I0910 17:50:28.180522   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:28.181044   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:28.181081   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:28.180999   24863 retry.go:31] will retry after 346.921747ms: waiting for machine to come up
	I0910 17:50:28.529416   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:28.529856   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:28.529881   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:28.529812   24863 retry.go:31] will retry after 484.868215ms: waiting for machine to come up
	I0910 17:50:29.016460   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:29.016814   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:29.016839   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:29.016763   24863 retry.go:31] will retry after 587.990914ms: waiting for machine to come up
	I0910 17:50:29.606433   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:29.606820   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:29.606848   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:29.606771   24863 retry.go:31] will retry after 651.119057ms: waiting for machine to come up
	I0910 17:50:30.259417   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:30.259760   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:30.259796   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:30.259735   24863 retry.go:31] will retry after 919.832632ms: waiting for machine to come up
	I0910 17:50:31.180652   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:31.181156   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:31.181178   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:31.181117   24863 retry.go:31] will retry after 1.100585606s: waiting for machine to come up
	I0910 17:50:32.282871   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:32.283254   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:32.283333   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:32.283248   24863 retry.go:31] will retry after 1.162968125s: waiting for machine to come up
	I0910 17:50:33.447357   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:33.447777   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:33.447805   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:33.447742   24863 retry.go:31] will retry after 1.773199242s: waiting for machine to come up
	I0910 17:50:35.222236   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:35.222808   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:35.222839   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:35.222783   24863 retry.go:31] will retry after 1.986522729s: waiting for machine to come up
	I0910 17:50:37.210834   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:37.211199   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:37.211226   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:37.211169   24863 retry.go:31] will retry after 1.791392731s: waiting for machine to come up
	I0910 17:50:39.005044   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:39.005472   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:39.005500   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:39.005423   24863 retry.go:31] will retry after 3.176867694s: waiting for machine to come up
	I0910 17:50:42.184204   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:42.184632   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find current IP address of domain ha-558946-m02 in network mk-ha-558946
	I0910 17:50:42.184662   24502 main.go:141] libmachine: (ha-558946-m02) DBG | I0910 17:50:42.184582   24863 retry.go:31] will retry after 4.493314199s: waiting for machine to come up
	I0910 17:50:46.679177   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:46.679745   24502 main.go:141] libmachine: (ha-558946-m02) Found IP for machine: 192.168.39.96
	I0910 17:50:46.679772   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has current primary IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:46.679778   24502 main.go:141] libmachine: (ha-558946-m02) Reserving static IP address...
	I0910 17:50:46.680151   24502 main.go:141] libmachine: (ha-558946-m02) DBG | unable to find host DHCP lease matching {name: "ha-558946-m02", mac: "52:54:00:68:52:22", ip: "192.168.39.96"} in network mk-ha-558946
	I0910 17:50:46.749349   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Getting to WaitForSSH function...
	I0910 17:50:46.749369   24502 main.go:141] libmachine: (ha-558946-m02) Reserved static IP address: 192.168.39.96
	I0910 17:50:46.749383   24502 main.go:141] libmachine: (ha-558946-m02) Waiting for SSH to be available...
	I0910 17:50:46.751784   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:46.752178   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:minikube Clientid:01:52:54:00:68:52:22}
	I0910 17:50:46.752199   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:46.752345   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Using SSH client type: external
	I0910 17:50:46.752371   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa (-rw-------)
	I0910 17:50:46.752401   24502 main.go:141] libmachine: (ha-558946-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 17:50:46.752414   24502 main.go:141] libmachine: (ha-558946-m02) DBG | About to run SSH command:
	I0910 17:50:46.752426   24502 main.go:141] libmachine: (ha-558946-m02) DBG | exit 0
	I0910 17:50:46.884926   24502 main.go:141] libmachine: (ha-558946-m02) DBG | SSH cmd err, output: <nil>: 
	I0910 17:50:46.885185   24502 main.go:141] libmachine: (ha-558946-m02) KVM machine creation complete!
	I0910 17:50:46.885469   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetConfigRaw
	I0910 17:50:46.886114   24502 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:50:46.886290   24502 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:50:46.886458   24502 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0910 17:50:46.886475   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetState
	I0910 17:50:46.887790   24502 main.go:141] libmachine: Detecting operating system of created instance...
	I0910 17:50:46.887802   24502 main.go:141] libmachine: Waiting for SSH to be available...
	I0910 17:50:46.887807   24502 main.go:141] libmachine: Getting to WaitForSSH function...
	I0910 17:50:46.887812   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:46.890903   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:46.891302   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:46.891318   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:46.891468   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:46.891662   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:46.891899   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:46.892097   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:46.892272   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:46.892519   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0910 17:50:46.892537   24502 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0910 17:50:47.004186   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:50:47.004204   24502 main.go:141] libmachine: Detecting the provisioner...
	I0910 17:50:47.004211   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:47.006918   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.007246   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:47.007270   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.007496   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:47.007681   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:47.007842   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:47.007965   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:47.008122   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:47.008321   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0910 17:50:47.008333   24502 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0910 17:50:47.121864   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0910 17:50:47.121923   24502 main.go:141] libmachine: found compatible host: buildroot
	I0910 17:50:47.121932   24502 main.go:141] libmachine: Provisioning with buildroot...
	I0910 17:50:47.121943   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetMachineName
	I0910 17:50:47.122176   24502 buildroot.go:166] provisioning hostname "ha-558946-m02"
	I0910 17:50:47.122203   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetMachineName
	I0910 17:50:47.122389   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:47.124630   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.124980   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:47.125006   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.125152   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:47.125439   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:47.125637   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:47.125805   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:47.125965   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:47.126152   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0910 17:50:47.126170   24502 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-558946-m02 && echo "ha-558946-m02" | sudo tee /etc/hostname
	I0910 17:50:47.252001   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-558946-m02
	
	I0910 17:50:47.252044   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:47.254689   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.255064   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:47.255094   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.255277   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:47.255463   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:47.255609   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:47.255703   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:47.255858   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:47.256042   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0910 17:50:47.256059   24502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-558946-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-558946-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-558946-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 17:50:47.379654   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:50:47.379678   24502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 17:50:47.379695   24502 buildroot.go:174] setting up certificates
	I0910 17:50:47.379705   24502 provision.go:84] configureAuth start
	I0910 17:50:47.379713   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetMachineName
	I0910 17:50:47.379953   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetIP
	I0910 17:50:47.382772   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.383194   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:47.383227   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.383377   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:47.385763   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.386073   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:47.386098   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.386212   24502 provision.go:143] copyHostCerts
	I0910 17:50:47.386253   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 17:50:47.386283   24502 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 17:50:47.386292   24502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 17:50:47.386351   24502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 17:50:47.386418   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 17:50:47.386435   24502 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 17:50:47.386442   24502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 17:50:47.386464   24502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 17:50:47.386507   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 17:50:47.386525   24502 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 17:50:47.386531   24502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 17:50:47.386552   24502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 17:50:47.386597   24502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.ha-558946-m02 san=[127.0.0.1 192.168.39.96 ha-558946-m02 localhost minikube]
	I0910 17:50:47.656823   24502 provision.go:177] copyRemoteCerts
	I0910 17:50:47.656876   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 17:50:47.656897   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:47.659317   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.659629   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:47.659660   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.659804   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:47.660022   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:47.660151   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:47.660279   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa Username:docker}
	I0910 17:50:47.747894   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0910 17:50:47.747962   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 17:50:47.775174   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0910 17:50:47.775243   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0910 17:50:47.801718   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0910 17:50:47.801784   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 17:50:47.828072   24502 provision.go:87] duration metric: took 448.356458ms to configureAuth
	I0910 17:50:47.828094   24502 buildroot.go:189] setting minikube options for container-runtime
	I0910 17:50:47.828297   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:50:47.828381   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:47.830678   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.831086   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:47.831133   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:47.831274   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:47.831460   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:47.831620   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:47.831763   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:47.831936   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:47.832077   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0910 17:50:47.832090   24502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 17:50:48.067038   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 17:50:48.067060   24502 main.go:141] libmachine: Checking connection to Docker...
	I0910 17:50:48.067067   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetURL
	I0910 17:50:48.068206   24502 main.go:141] libmachine: (ha-558946-m02) DBG | Using libvirt version 6000000
	I0910 17:50:48.070686   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.071035   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:48.071059   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.071223   24502 main.go:141] libmachine: Docker is up and running!
	I0910 17:50:48.071233   24502 main.go:141] libmachine: Reticulating splines...
	I0910 17:50:48.071240   24502 client.go:171] duration metric: took 21.779684262s to LocalClient.Create
	I0910 17:50:48.071260   24502 start.go:167] duration metric: took 21.77974298s to libmachine.API.Create "ha-558946"
	I0910 17:50:48.071272   24502 start.go:293] postStartSetup for "ha-558946-m02" (driver="kvm2")
	I0910 17:50:48.071284   24502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 17:50:48.071305   24502 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:50:48.071536   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 17:50:48.071562   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:48.073425   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.073731   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:48.073758   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.073922   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:48.074073   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:48.074226   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:48.074377   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa Username:docker}
	I0910 17:50:48.159138   24502 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 17:50:48.163448   24502 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 17:50:48.163468   24502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 17:50:48.163522   24502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 17:50:48.163591   24502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 17:50:48.163600   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /etc/ssl/certs/131212.pem
	I0910 17:50:48.163677   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 17:50:48.172521   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 17:50:48.196168   24502 start.go:296] duration metric: took 124.877281ms for postStartSetup
	I0910 17:50:48.196213   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetConfigRaw
	I0910 17:50:48.196746   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetIP
	I0910 17:50:48.199300   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.199635   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:48.199660   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.199860   24502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:50:48.200025   24502 start.go:128] duration metric: took 21.926351928s to createHost
	I0910 17:50:48.200046   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:48.202478   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.202835   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:48.202856   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.203048   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:48.203280   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:48.203460   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:48.203641   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:48.203823   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:50:48.204006   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0910 17:50:48.204016   24502 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 17:50:48.317757   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725990648.288976030
	
	I0910 17:50:48.317778   24502 fix.go:216] guest clock: 1725990648.288976030
	I0910 17:50:48.317786   24502 fix.go:229] Guest: 2024-09-10 17:50:48.28897603 +0000 UTC Remote: 2024-09-10 17:50:48.200035363 +0000 UTC m=+69.145864566 (delta=88.940667ms)
	I0910 17:50:48.317799   24502 fix.go:200] guest clock delta is within tolerance: 88.940667ms
	I0910 17:50:48.317803   24502 start.go:83] releasing machines lock for "ha-558946-m02", held for 22.04420652s
	I0910 17:50:48.317820   24502 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:50:48.318049   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetIP
	I0910 17:50:48.320388   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.320723   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:48.320750   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.322540   24502 out.go:177] * Found network options:
	I0910 17:50:48.323634   24502 out.go:177]   - NO_PROXY=192.168.39.109
	W0910 17:50:48.324768   24502 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 17:50:48.324796   24502 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:50:48.325356   24502 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:50:48.325504   24502 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 17:50:48.325571   24502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 17:50:48.325614   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	W0910 17:50:48.325695   24502 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 17:50:48.325752   24502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 17:50:48.325775   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 17:50:48.328299   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.328326   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.328637   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:48.328672   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:48.328698   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.328713   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:48.329013   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:48.329044   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 17:50:48.329198   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:48.329207   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 17:50:48.329360   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:48.329422   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 17:50:48.329496   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa Username:docker}
	I0910 17:50:48.329546   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa Username:docker}
	I0910 17:50:48.565787   24502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 17:50:48.571778   24502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 17:50:48.571827   24502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 17:50:48.587592   24502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 17:50:48.587613   24502 start.go:495] detecting cgroup driver to use...
	I0910 17:50:48.587667   24502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 17:50:48.603346   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 17:50:48.616332   24502 docker.go:217] disabling cri-docker service (if available) ...
	I0910 17:50:48.616374   24502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 17:50:48.629056   24502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 17:50:48.641532   24502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 17:50:48.759370   24502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 17:50:48.897526   24502 docker.go:233] disabling docker service ...
	I0910 17:50:48.897595   24502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 17:50:48.911400   24502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 17:50:48.924332   24502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 17:50:49.055513   24502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 17:50:49.183688   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 17:50:49.197405   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 17:50:49.215069   24502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 17:50:49.215140   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:49.225078   24502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 17:50:49.225132   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:49.234974   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:49.244634   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:49.254338   24502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 17:50:49.264276   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:49.273976   24502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:49.290130   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:50:49.299886   24502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 17:50:49.308688   24502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 17:50:49.308762   24502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 17:50:49.320759   24502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 17:50:49.329426   24502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:50:49.438096   24502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 17:50:49.528595   24502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 17:50:49.528657   24502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 17:50:49.533302   24502 start.go:563] Will wait 60s for crictl version
	I0910 17:50:49.533353   24502 ssh_runner.go:195] Run: which crictl
	I0910 17:50:49.537491   24502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 17:50:49.578565   24502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 17:50:49.578640   24502 ssh_runner.go:195] Run: crio --version
	I0910 17:50:49.610720   24502 ssh_runner.go:195] Run: crio --version
	I0910 17:50:49.640788   24502 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 17:50:49.642146   24502 out.go:177]   - env NO_PROXY=192.168.39.109
	I0910 17:50:49.643245   24502 main.go:141] libmachine: (ha-558946-m02) Calling .GetIP
	I0910 17:50:49.645873   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:49.646268   24502 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:41 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 17:50:49.646293   24502 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 17:50:49.646449   24502 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 17:50:49.650924   24502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:50:49.664779   24502 mustload.go:65] Loading cluster: ha-558946
	I0910 17:50:49.664939   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:50:49.665246   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:50:49.665273   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:50:49.679865   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36601
	I0910 17:50:49.680243   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:50:49.680688   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:50:49.680705   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:50:49.680978   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:50:49.681182   24502 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:50:49.682670   24502 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:50:49.682929   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:50:49.682959   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:50:49.698083   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33255
	I0910 17:50:49.698514   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:50:49.698944   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:50:49.698957   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:50:49.699229   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:50:49.699365   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:49.699545   24502 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946 for IP: 192.168.39.96
	I0910 17:50:49.699559   24502 certs.go:194] generating shared ca certs ...
	I0910 17:50:49.699576   24502 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:49.699683   24502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 17:50:49.699717   24502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 17:50:49.699726   24502 certs.go:256] generating profile certs ...
	I0910 17:50:49.699785   24502 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.key
	I0910 17:50:49.699808   24502 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.01327bff
	I0910 17:50:49.699822   24502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.01327bff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.109 192.168.39.96 192.168.39.254]
	I0910 17:50:50.007327   24502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.01327bff ...
	I0910 17:50:50.007355   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.01327bff: {Name:mkfa381ae2fc0a445f7d11499df3d390f9773ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:50.007535   24502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.01327bff ...
	I0910 17:50:50.007552   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.01327bff: {Name:mk1480193644e02512eef0392dfef1eaac9eed03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:50:50.007652   24502 certs.go:381] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.01327bff -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt
	I0910 17:50:50.007778   24502 certs.go:385] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.01327bff -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key
	I0910 17:50:50.007900   24502 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key
	I0910 17:50:50.007914   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 17:50:50.007925   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0910 17:50:50.007936   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 17:50:50.007949   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 17:50:50.007961   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 17:50:50.007973   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 17:50:50.007985   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 17:50:50.007997   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 17:50:50.008040   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 17:50:50.008068   24502 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 17:50:50.008077   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 17:50:50.008097   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 17:50:50.008118   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 17:50:50.008138   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 17:50:50.008174   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 17:50:50.008198   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:50:50.008212   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem -> /usr/share/ca-certificates/13121.pem
	I0910 17:50:50.008224   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /usr/share/ca-certificates/131212.pem
	I0910 17:50:50.008265   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:50.011368   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:50.011776   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:50.011803   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:50.012009   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:50.012152   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:50.012305   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:50.012394   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:50:50.089358   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0910 17:50:50.095687   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0910 17:50:50.109787   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0910 17:50:50.114258   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0910 17:50:50.126575   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0910 17:50:50.130919   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0910 17:50:50.142515   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0910 17:50:50.146718   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0910 17:50:50.156242   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0910 17:50:50.160610   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0910 17:50:50.169811   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0910 17:50:50.173872   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0910 17:50:50.185576   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 17:50:50.214190   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 17:50:50.237485   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 17:50:50.260239   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 17:50:50.283526   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0910 17:50:50.306686   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 17:50:50.330388   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 17:50:50.356195   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 17:50:50.378213   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 17:50:50.401337   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 17:50:50.423979   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 17:50:50.446336   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0910 17:50:50.464222   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0910 17:50:50.481182   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0910 17:50:50.497640   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0910 17:50:50.513639   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0910 17:50:50.530054   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0910 17:50:50.545355   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0910 17:50:50.560571   24502 ssh_runner.go:195] Run: openssl version
	I0910 17:50:50.565852   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 17:50:50.576017   24502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 17:50:50.580131   24502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 17:50:50.580174   24502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 17:50:50.585765   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 17:50:50.596008   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 17:50:50.606192   24502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 17:50:50.610245   24502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 17:50:50.610294   24502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 17:50:50.615708   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 17:50:50.625971   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 17:50:50.636249   24502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:50:50.640536   24502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:50:50.640581   24502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:50:50.645901   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 17:50:50.656307   24502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 17:50:50.660154   24502 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 17:50:50.660201   24502 kubeadm.go:934] updating node {m02 192.168.39.96 8443 v1.31.0 crio true true} ...
	I0910 17:50:50.660284   24502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-558946-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 17:50:50.660314   24502 kube-vip.go:115] generating kube-vip config ...
	I0910 17:50:50.660349   24502 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0910 17:50:50.676111   24502 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0910 17:50:50.676175   24502 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0910 17:50:50.676226   24502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 17:50:50.691724   24502 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0910 17:50:50.691770   24502 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0910 17:50:50.702172   24502 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0910 17:50:50.702192   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 17:50:50.702200   24502 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0910 17:50:50.702210   24502 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0910 17:50:50.702239   24502 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 17:50:50.706642   24502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0910 17:50:50.706666   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0910 17:50:51.267193   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 17:50:51.267267   24502 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 17:50:51.272358   24502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0910 17:50:51.272392   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0910 17:50:51.553190   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:50:51.567070   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 17:50:51.567155   24502 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 17:50:51.572688   24502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0910 17:50:51.572717   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0910 17:50:51.864476   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0910 17:50:51.873773   24502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0910 17:50:51.890413   24502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 17:50:51.906916   24502 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0910 17:50:51.923468   24502 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0910 17:50:51.927336   24502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:50:51.939439   24502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:50:52.080482   24502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:50:52.098304   24502 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:50:52.098773   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:50:52.098828   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:50:52.113446   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46751
	I0910 17:50:52.113848   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:50:52.114302   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:50:52.114316   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:50:52.114605   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:50:52.114763   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:50:52.114925   24502 start.go:317] joinCluster: &{Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:50:52.115031   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0910 17:50:52.115054   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:50:52.118056   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:52.118510   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:50:52.118536   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:50:52.118675   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:50:52.118848   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:50:52.118987   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:50:52.119153   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:50:52.267488   24502 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:50:52.267535   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token efs0vc.gxraj55oklb55bap --discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-558946-m02 --control-plane --apiserver-advertise-address=192.168.39.96 --apiserver-bind-port=8443"
	I0910 17:51:13.456561   24502 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token efs0vc.gxraj55oklb55bap --discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-558946-m02 --control-plane --apiserver-advertise-address=192.168.39.96 --apiserver-bind-port=8443": (21.188985738s)
	I0910 17:51:13.456620   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0910 17:51:13.939315   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-558946-m02 minikube.k8s.io/updated_at=2024_09_10T17_51_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=ha-558946 minikube.k8s.io/primary=false
	I0910 17:51:14.083936   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-558946-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0910 17:51:14.222359   24502 start.go:319] duration metric: took 22.107427814s to joinCluster
	I0910 17:51:14.222492   24502 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:51:14.222804   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:51:14.224007   24502 out.go:177] * Verifying Kubernetes components...
	I0910 17:51:14.225334   24502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:51:14.506720   24502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:51:14.561822   24502 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:51:14.562140   24502 kapi.go:59] client config for ha-558946: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.crt", KeyFile:"/home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.key", CAFile:"/home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f2c360), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0910 17:51:14.562238   24502 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.109:8443
	I0910 17:51:14.562514   24502 node_ready.go:35] waiting up to 6m0s for node "ha-558946-m02" to be "Ready" ...
	I0910 17:51:14.562681   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:14.562692   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:14.562699   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:14.562703   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:14.573177   24502 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0910 17:51:15.062865   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:15.062883   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:15.062891   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:15.062894   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:15.066799   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:15.562701   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:15.562721   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:15.562728   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:15.562733   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:15.567488   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:51:16.062971   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:16.062990   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:16.062998   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:16.063002   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:16.070102   24502 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 17:51:16.563715   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:16.563735   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:16.563746   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:16.563751   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:16.567011   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:16.571018   24502 node_ready.go:53] node "ha-558946-m02" has status "Ready":"False"
	I0910 17:51:17.063398   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:17.063424   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:17.063435   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:17.063442   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:17.066914   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:17.563293   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:17.563313   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:17.563321   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:17.563324   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:17.566577   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:18.063498   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:18.063518   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:18.063525   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:18.063529   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:18.067084   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:18.563143   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:18.563169   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:18.563177   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:18.563182   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:18.567406   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:51:19.062809   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:19.062831   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:19.062841   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:19.062848   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:19.066248   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:19.066989   24502 node_ready.go:53] node "ha-558946-m02" has status "Ready":"False"
	I0910 17:51:19.562731   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:19.562749   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:19.562757   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:19.562760   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:19.566574   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:20.063451   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:20.063476   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:20.063486   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:20.063496   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:20.066873   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:20.562908   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:20.562931   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:20.562942   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:20.562947   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:20.565923   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:21.062891   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:21.062924   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:21.062936   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:21.062944   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:21.065940   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:21.563117   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:21.563137   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:21.563147   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:21.563152   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:21.566136   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:21.566568   24502 node_ready.go:53] node "ha-558946-m02" has status "Ready":"False"
	I0910 17:51:22.062927   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:22.062948   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:22.062955   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:22.062959   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:22.066284   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:22.563595   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:22.563617   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:22.563624   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:22.563631   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:22.566858   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:23.062809   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:23.062829   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:23.062837   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:23.062842   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:23.066208   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:23.563196   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:23.563221   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:23.563232   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:23.563238   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:23.566084   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:23.566655   24502 node_ready.go:53] node "ha-558946-m02" has status "Ready":"False"
	I0910 17:51:24.062985   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:24.063023   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:24.063030   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:24.063034   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:24.065854   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:24.563378   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:24.563395   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:24.563403   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:24.563406   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:24.566311   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:25.063016   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:25.063038   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:25.063046   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:25.063051   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:25.066024   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:25.563229   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:25.563249   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:25.563258   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:25.563261   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:25.566574   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:25.567168   24502 node_ready.go:53] node "ha-558946-m02" has status "Ready":"False"
	I0910 17:51:26.063741   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:26.063760   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:26.063767   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:26.063771   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:26.066805   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:26.563428   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:26.563449   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:26.563456   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:26.563459   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:26.567621   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:51:27.062709   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:27.062731   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:27.062739   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:27.062744   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:27.066061   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:27.563663   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:27.563687   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:27.563695   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:27.563699   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:27.567286   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:27.567776   24502 node_ready.go:53] node "ha-558946-m02" has status "Ready":"False"
	I0910 17:51:28.063156   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:28.063178   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.063185   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.063192   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.067527   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:51:28.563482   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:28.563507   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.563516   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.563519   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.566367   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:28.566948   24502 node_ready.go:49] node "ha-558946-m02" has status "Ready":"True"
	I0910 17:51:28.566980   24502 node_ready.go:38] duration metric: took 14.004409051s for node "ha-558946-m02" to be "Ready" ...
	I0910 17:51:28.566992   24502 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:51:28.567082   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:51:28.567092   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.567101   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.567107   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.571241   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:51:28.579735   24502 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-5pv7s" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.579820   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-5pv7s
	I0910 17:51:28.579831   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.579841   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.579849   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.583079   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:28.585580   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:28.585595   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.585604   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.585612   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.587877   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:28.588600   24502 pod_ready.go:93] pod "coredns-6f6b679f8f-5pv7s" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:28.588617   24502 pod_ready.go:82] duration metric: took 8.861813ms for pod "coredns-6f6b679f8f-5pv7s" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.588625   24502 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fmcmc" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.588681   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-fmcmc
	I0910 17:51:28.588691   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.588701   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.588709   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.591647   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:28.592207   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:28.592219   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.592225   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.592228   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.595005   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:28.595955   24502 pod_ready.go:93] pod "coredns-6f6b679f8f-fmcmc" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:28.595978   24502 pod_ready.go:82] duration metric: took 7.345951ms for pod "coredns-6f6b679f8f-fmcmc" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.595989   24502 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.596049   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-ha-558946
	I0910 17:51:28.596062   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.596072   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.596081   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.598101   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:28.598684   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:28.598698   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.598703   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.598710   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.600798   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:28.601421   24502 pod_ready.go:93] pod "etcd-ha-558946" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:28.601444   24502 pod_ready.go:82] duration metric: took 5.442437ms for pod "etcd-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.601454   24502 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.601507   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-ha-558946-m02
	I0910 17:51:28.601519   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.601529   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.601537   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.603535   24502 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0910 17:51:28.604125   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:28.604138   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.604145   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.604149   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.606230   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:28.606781   24502 pod_ready.go:93] pod "etcd-ha-558946-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:28.606802   24502 pod_ready.go:82] duration metric: took 5.339798ms for pod "etcd-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.606819   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.764183   24502 request.go:632] Waited for 157.311635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946
	I0910 17:51:28.764266   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946
	I0910 17:51:28.764272   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.764281   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.764285   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.767572   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:28.964543   24502 request.go:632] Waited for 196.357743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:28.964593   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:28.964598   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:28.964605   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:28.964608   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:28.967663   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:28.968220   24502 pod_ready.go:93] pod "kube-apiserver-ha-558946" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:28.968241   24502 pod_ready.go:82] duration metric: took 361.411821ms for pod "kube-apiserver-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:28.968253   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:29.164556   24502 request.go:632] Waited for 196.24116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946-m02
	I0910 17:51:29.164621   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946-m02
	I0910 17:51:29.164630   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:29.164638   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:29.164645   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:29.167001   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:29.364067   24502 request.go:632] Waited for 196.374164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:29.364119   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:29.364124   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:29.364130   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:29.364134   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:29.367203   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:29.367698   24502 pod_ready.go:93] pod "kube-apiserver-ha-558946-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:29.367715   24502 pod_ready.go:82] duration metric: took 399.454798ms for pod "kube-apiserver-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:29.367723   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:29.563838   24502 request.go:632] Waited for 196.057022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946
	I0910 17:51:29.563911   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946
	I0910 17:51:29.563917   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:29.563926   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:29.563930   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:29.567450   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:29.763711   24502 request.go:632] Waited for 195.646381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:29.763760   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:29.763765   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:29.763772   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:29.763775   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:29.766513   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:29.767086   24502 pod_ready.go:93] pod "kube-controller-manager-ha-558946" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:29.767110   24502 pod_ready.go:82] duration metric: took 399.379451ms for pod "kube-controller-manager-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:29.767125   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:29.964138   24502 request.go:632] Waited for 196.946066ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946-m02
	I0910 17:51:29.964197   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946-m02
	I0910 17:51:29.964213   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:29.964223   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:29.964229   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:29.967201   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:30.164383   24502 request.go:632] Waited for 196.414667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:30.164460   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:30.164467   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:30.164475   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:30.164484   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:30.167334   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:30.167763   24502 pod_ready.go:93] pod "kube-controller-manager-ha-558946-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:30.167782   24502 pod_ready.go:82] duration metric: took 400.648369ms for pod "kube-controller-manager-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:30.167792   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gjqzx" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:30.363917   24502 request.go:632] Waited for 196.065663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjqzx
	I0910 17:51:30.364004   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjqzx
	I0910 17:51:30.364014   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:30.364028   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:30.364037   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:30.366865   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:30.564369   24502 request.go:632] Waited for 196.350473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:30.564423   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:30.564429   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:30.564439   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:30.564444   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:30.572510   24502 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0910 17:51:30.572959   24502 pod_ready.go:93] pod "kube-proxy-gjqzx" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:30.572976   24502 pod_ready.go:82] duration metric: took 405.17516ms for pod "kube-proxy-gjqzx" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:30.572988   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xggtm" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:30.764121   24502 request.go:632] Waited for 191.070402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xggtm
	I0910 17:51:30.764196   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xggtm
	I0910 17:51:30.764204   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:30.764211   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:30.764219   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:30.767402   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:30.964422   24502 request.go:632] Waited for 196.366316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:30.964475   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:30.964480   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:30.964489   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:30.964496   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:30.967699   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:30.968152   24502 pod_ready.go:93] pod "kube-proxy-xggtm" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:30.968167   24502 pod_ready.go:82] duration metric: took 395.172639ms for pod "kube-proxy-xggtm" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:30.968175   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:31.164293   24502 request.go:632] Waited for 196.0607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946
	I0910 17:51:31.164366   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946
	I0910 17:51:31.164375   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:31.164382   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:31.164388   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:31.167528   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:31.364452   24502 request.go:632] Waited for 196.327135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:31.364538   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:51:31.364549   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:31.364560   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:31.364569   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:31.367389   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:31.367921   24502 pod_ready.go:93] pod "kube-scheduler-ha-558946" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:31.367938   24502 pod_ready.go:82] duration metric: took 399.757768ms for pod "kube-scheduler-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:31.367948   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:31.564114   24502 request.go:632] Waited for 196.105026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946-m02
	I0910 17:51:31.564170   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946-m02
	I0910 17:51:31.564176   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:31.564189   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:31.564207   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:31.567246   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:31.764315   24502 request.go:632] Waited for 196.351153ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:31.764372   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:51:31.764377   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:31.764385   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:31.764389   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:31.767357   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:51:31.767711   24502 pod_ready.go:93] pod "kube-scheduler-ha-558946-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 17:51:31.767726   24502 pod_ready.go:82] duration metric: took 399.772816ms for pod "kube-scheduler-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:51:31.767736   24502 pod_ready.go:39] duration metric: took 3.200729041s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:51:31.767758   24502 api_server.go:52] waiting for apiserver process to appear ...
	I0910 17:51:31.767808   24502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:51:31.784194   24502 api_server.go:72] duration metric: took 17.561653367s to wait for apiserver process to appear ...
	I0910 17:51:31.784214   24502 api_server.go:88] waiting for apiserver healthz status ...
	I0910 17:51:31.784234   24502 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I0910 17:51:31.789969   24502 api_server.go:279] https://192.168.39.109:8443/healthz returned 200:
	ok
	I0910 17:51:31.790039   24502 round_trippers.go:463] GET https://192.168.39.109:8443/version
	I0910 17:51:31.790050   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:31.790061   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:31.790070   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:31.790851   24502 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0910 17:51:31.790982   24502 api_server.go:141] control plane version: v1.31.0
	I0910 17:51:31.791003   24502 api_server.go:131] duration metric: took 6.782084ms to wait for apiserver health ...
	I0910 17:51:31.791020   24502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 17:51:31.964413   24502 request.go:632] Waited for 173.326677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:51:31.964477   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:51:31.964482   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:31.964489   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:31.964506   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:31.969000   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:51:31.974052   24502 system_pods.go:59] 17 kube-system pods found
	I0910 17:51:31.974080   24502 system_pods.go:61] "coredns-6f6b679f8f-5pv7s" [e75ceddc-7576-45f6-8b80-2071bc7fbef8] Running
	I0910 17:51:31.974084   24502 system_pods.go:61] "coredns-6f6b679f8f-fmcmc" [0d79d296-3ee7-4b7b-8869-e45465da70ff] Running
	I0910 17:51:31.974089   24502 system_pods.go:61] "etcd-ha-558946" [d99a9237-7866-40f1-95d6-c6488183479e] Running
	I0910 17:51:31.974092   24502 system_pods.go:61] "etcd-ha-558946-m02" [d22427c5-1548-4bd2-b1c1-5a6a4353077a] Running
	I0910 17:51:31.974096   24502 system_pods.go:61] "kindnet-n8n67" [019cf933-bf89-485d-a837-bf8bbedbc0df] Running
	I0910 17:51:31.974100   24502 system_pods.go:61] "kindnet-sfr7m" [31ccb06a-6f76-4a18-894c-707993f766e5] Running
	I0910 17:51:31.974103   24502 system_pods.go:61] "kube-apiserver-ha-558946" [74003dbd-903b-48de-b85f-973654d0d58e] Running
	I0910 17:51:31.974106   24502 system_pods.go:61] "kube-apiserver-ha-558946-m02" [9136cd3a-a68e-4167-808d-61b33978cf45] Running
	I0910 17:51:31.974110   24502 system_pods.go:61] "kube-controller-manager-ha-558946" [82453b26-31b3-4c6e-8e37-26eb141923fc] Running
	I0910 17:51:31.974113   24502 system_pods.go:61] "kube-controller-manager-ha-558946-m02" [d658071a-4335-4933-88c8-4d2cfccb40df] Running
	I0910 17:51:31.974116   24502 system_pods.go:61] "kube-proxy-gjqzx" [35a3fe57-a2d6-4134-8205-ce5c8d09b707] Running
	I0910 17:51:31.974120   24502 system_pods.go:61] "kube-proxy-xggtm" [347371e4-83b7-474c-8924-d33c479d736a] Running
	I0910 17:51:31.974123   24502 system_pods.go:61] "kube-scheduler-ha-558946" [e99973ac-5718-4769-99e3-282c3c25b8f8] Running
	I0910 17:51:31.974126   24502 system_pods.go:61] "kube-scheduler-ha-558946-m02" [6c57c232-f86e-417c-b3a6-867b3ed443bf] Running
	I0910 17:51:31.974129   24502 system_pods.go:61] "kube-vip-ha-558946" [810f85ef-6900-456e-877e-095d38286613] Running
	I0910 17:51:31.974132   24502 system_pods.go:61] "kube-vip-ha-558946-m02" [59850a02-4ce3-47dc-a250-f18c0fd9533c] Running
	I0910 17:51:31.974134   24502 system_pods.go:61] "storage-provisioner" [baf5cd7e-5266-4d55-bd6c-459257baa463] Running
	I0910 17:51:31.974141   24502 system_pods.go:74] duration metric: took 183.113705ms to wait for pod list to return data ...
	I0910 17:51:31.974149   24502 default_sa.go:34] waiting for default service account to be created ...
	I0910 17:51:32.164305   24502 request.go:632] Waited for 190.09264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/default/serviceaccounts
	I0910 17:51:32.164357   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/default/serviceaccounts
	I0910 17:51:32.164362   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:32.164369   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:32.164373   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:32.168172   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:32.168453   24502 default_sa.go:45] found service account: "default"
	I0910 17:51:32.168474   24502 default_sa.go:55] duration metric: took 194.318196ms for default service account to be created ...
	I0910 17:51:32.168484   24502 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 17:51:32.363890   24502 request.go:632] Waited for 195.339749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:51:32.363950   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:51:32.363964   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:32.363976   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:32.363985   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:32.367968   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:32.372851   24502 system_pods.go:86] 17 kube-system pods found
	I0910 17:51:32.372873   24502 system_pods.go:89] "coredns-6f6b679f8f-5pv7s" [e75ceddc-7576-45f6-8b80-2071bc7fbef8] Running
	I0910 17:51:32.372878   24502 system_pods.go:89] "coredns-6f6b679f8f-fmcmc" [0d79d296-3ee7-4b7b-8869-e45465da70ff] Running
	I0910 17:51:32.372883   24502 system_pods.go:89] "etcd-ha-558946" [d99a9237-7866-40f1-95d6-c6488183479e] Running
	I0910 17:51:32.372887   24502 system_pods.go:89] "etcd-ha-558946-m02" [d22427c5-1548-4bd2-b1c1-5a6a4353077a] Running
	I0910 17:51:32.372891   24502 system_pods.go:89] "kindnet-n8n67" [019cf933-bf89-485d-a837-bf8bbedbc0df] Running
	I0910 17:51:32.372894   24502 system_pods.go:89] "kindnet-sfr7m" [31ccb06a-6f76-4a18-894c-707993f766e5] Running
	I0910 17:51:32.372898   24502 system_pods.go:89] "kube-apiserver-ha-558946" [74003dbd-903b-48de-b85f-973654d0d58e] Running
	I0910 17:51:32.372901   24502 system_pods.go:89] "kube-apiserver-ha-558946-m02" [9136cd3a-a68e-4167-808d-61b33978cf45] Running
	I0910 17:51:32.372905   24502 system_pods.go:89] "kube-controller-manager-ha-558946" [82453b26-31b3-4c6e-8e37-26eb141923fc] Running
	I0910 17:51:32.372908   24502 system_pods.go:89] "kube-controller-manager-ha-558946-m02" [d658071a-4335-4933-88c8-4d2cfccb40df] Running
	I0910 17:51:32.372911   24502 system_pods.go:89] "kube-proxy-gjqzx" [35a3fe57-a2d6-4134-8205-ce5c8d09b707] Running
	I0910 17:51:32.372915   24502 system_pods.go:89] "kube-proxy-xggtm" [347371e4-83b7-474c-8924-d33c479d736a] Running
	I0910 17:51:32.372918   24502 system_pods.go:89] "kube-scheduler-ha-558946" [e99973ac-5718-4769-99e3-282c3c25b8f8] Running
	I0910 17:51:32.372921   24502 system_pods.go:89] "kube-scheduler-ha-558946-m02" [6c57c232-f86e-417c-b3a6-867b3ed443bf] Running
	I0910 17:51:32.372926   24502 system_pods.go:89] "kube-vip-ha-558946" [810f85ef-6900-456e-877e-095d38286613] Running
	I0910 17:51:32.372932   24502 system_pods.go:89] "kube-vip-ha-558946-m02" [59850a02-4ce3-47dc-a250-f18c0fd9533c] Running
	I0910 17:51:32.372934   24502 system_pods.go:89] "storage-provisioner" [baf5cd7e-5266-4d55-bd6c-459257baa463] Running
	I0910 17:51:32.372940   24502 system_pods.go:126] duration metric: took 204.447248ms to wait for k8s-apps to be running ...
	I0910 17:51:32.372948   24502 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 17:51:32.372987   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:51:32.387671   24502 system_svc.go:56] duration metric: took 14.714456ms WaitForService to wait for kubelet
	I0910 17:51:32.387696   24502 kubeadm.go:582] duration metric: took 18.165156927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:51:32.387732   24502 node_conditions.go:102] verifying NodePressure condition ...
	I0910 17:51:32.564145   24502 request.go:632] Waited for 176.338842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes
	I0910 17:51:32.564204   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes
	I0910 17:51:32.564212   24502 round_trippers.go:469] Request Headers:
	I0910 17:51:32.564220   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:51:32.564229   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:51:32.567596   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:51:32.568287   24502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 17:51:32.568308   24502 node_conditions.go:123] node cpu capacity is 2
	I0910 17:51:32.568338   24502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 17:51:32.568348   24502 node_conditions.go:123] node cpu capacity is 2
	I0910 17:51:32.568355   24502 node_conditions.go:105] duration metric: took 180.614589ms to run NodePressure ...
	I0910 17:51:32.568373   24502 start.go:241] waiting for startup goroutines ...
	I0910 17:51:32.568418   24502 start.go:255] writing updated cluster config ...
	I0910 17:51:32.570407   24502 out.go:201] 
	I0910 17:51:32.571730   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:51:32.571863   24502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:51:32.573371   24502 out.go:177] * Starting "ha-558946-m03" control-plane node in "ha-558946" cluster
	I0910 17:51:32.574294   24502 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:51:32.574313   24502 cache.go:56] Caching tarball of preloaded images
	I0910 17:51:32.574417   24502 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 17:51:32.574429   24502 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 17:51:32.574521   24502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:51:32.574746   24502 start.go:360] acquireMachinesLock for ha-558946-m03: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 17:51:32.574800   24502 start.go:364] duration metric: took 35.284µs to acquireMachinesLock for "ha-558946-m03"
	I0910 17:51:32.574829   24502 start.go:93] Provisioning new machine with config: &{Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:51:32.574942   24502 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0910 17:51:32.576218   24502 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 17:51:32.576317   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:51:32.576351   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:51:32.591822   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0910 17:51:32.592230   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:51:32.592797   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:51:32.592826   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:51:32.593167   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:51:32.593344   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetMachineName
	I0910 17:51:32.593482   24502 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:51:32.593626   24502 start.go:159] libmachine.API.Create for "ha-558946" (driver="kvm2")
	I0910 17:51:32.593654   24502 client.go:168] LocalClient.Create starting
	I0910 17:51:32.593689   24502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem
	I0910 17:51:32.593718   24502 main.go:141] libmachine: Decoding PEM data...
	I0910 17:51:32.593733   24502 main.go:141] libmachine: Parsing certificate...
	I0910 17:51:32.593781   24502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem
	I0910 17:51:32.593799   24502 main.go:141] libmachine: Decoding PEM data...
	I0910 17:51:32.593809   24502 main.go:141] libmachine: Parsing certificate...
	I0910 17:51:32.593828   24502 main.go:141] libmachine: Running pre-create checks...
	I0910 17:51:32.593836   24502 main.go:141] libmachine: (ha-558946-m03) Calling .PreCreateCheck
	I0910 17:51:32.593992   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetConfigRaw
	I0910 17:51:32.594338   24502 main.go:141] libmachine: Creating machine...
	I0910 17:51:32.594353   24502 main.go:141] libmachine: (ha-558946-m03) Calling .Create
	I0910 17:51:32.594486   24502 main.go:141] libmachine: (ha-558946-m03) Creating KVM machine...
	I0910 17:51:32.595809   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found existing default KVM network
	I0910 17:51:32.595945   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found existing private KVM network mk-ha-558946
	I0910 17:51:32.596089   24502 main.go:141] libmachine: (ha-558946-m03) Setting up store path in /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03 ...
	I0910 17:51:32.596114   24502 main.go:141] libmachine: (ha-558946-m03) Building disk image from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 17:51:32.596186   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:32.596074   25238 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:51:32.596285   24502 main.go:141] libmachine: (ha-558946-m03) Downloading /home/jenkins/minikube-integration/19598-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 17:51:32.820086   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:32.819982   25238 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa...
	I0910 17:51:32.939951   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:32.939817   25238 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/ha-558946-m03.rawdisk...
	I0910 17:51:32.939979   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Writing magic tar header
	I0910 17:51:32.939989   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Writing SSH key tar header
	I0910 17:51:32.939998   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:32.939949   25238 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03 ...
	I0910 17:51:32.940114   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03
	I0910 17:51:32.940145   24502 main.go:141] libmachine: (ha-558946-m03) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03 (perms=drwx------)
	I0910 17:51:32.940160   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines
	I0910 17:51:32.940179   24502 main.go:141] libmachine: (ha-558946-m03) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines (perms=drwxr-xr-x)
	I0910 17:51:32.940196   24502 main.go:141] libmachine: (ha-558946-m03) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube (perms=drwxr-xr-x)
	I0910 17:51:32.940204   24502 main.go:141] libmachine: (ha-558946-m03) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973 (perms=drwxrwxr-x)
	I0910 17:51:32.940216   24502 main.go:141] libmachine: (ha-558946-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0910 17:51:32.940241   24502 main.go:141] libmachine: (ha-558946-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0910 17:51:32.940256   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:51:32.940267   24502 main.go:141] libmachine: (ha-558946-m03) Creating domain...
	I0910 17:51:32.940285   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973
	I0910 17:51:32.940299   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0910 17:51:32.940315   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Checking permissions on dir: /home/jenkins
	I0910 17:51:32.940327   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Checking permissions on dir: /home
	I0910 17:51:32.940340   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Skipping /home - not owner
	I0910 17:51:32.941201   24502 main.go:141] libmachine: (ha-558946-m03) define libvirt domain using xml: 
	I0910 17:51:32.941218   24502 main.go:141] libmachine: (ha-558946-m03) <domain type='kvm'>
	I0910 17:51:32.941225   24502 main.go:141] libmachine: (ha-558946-m03)   <name>ha-558946-m03</name>
	I0910 17:51:32.941230   24502 main.go:141] libmachine: (ha-558946-m03)   <memory unit='MiB'>2200</memory>
	I0910 17:51:32.941235   24502 main.go:141] libmachine: (ha-558946-m03)   <vcpu>2</vcpu>
	I0910 17:51:32.941243   24502 main.go:141] libmachine: (ha-558946-m03)   <features>
	I0910 17:51:32.941248   24502 main.go:141] libmachine: (ha-558946-m03)     <acpi/>
	I0910 17:51:32.941253   24502 main.go:141] libmachine: (ha-558946-m03)     <apic/>
	I0910 17:51:32.941257   24502 main.go:141] libmachine: (ha-558946-m03)     <pae/>
	I0910 17:51:32.941262   24502 main.go:141] libmachine: (ha-558946-m03)     
	I0910 17:51:32.941267   24502 main.go:141] libmachine: (ha-558946-m03)   </features>
	I0910 17:51:32.941272   24502 main.go:141] libmachine: (ha-558946-m03)   <cpu mode='host-passthrough'>
	I0910 17:51:32.941277   24502 main.go:141] libmachine: (ha-558946-m03)   
	I0910 17:51:32.941281   24502 main.go:141] libmachine: (ha-558946-m03)   </cpu>
	I0910 17:51:32.941287   24502 main.go:141] libmachine: (ha-558946-m03)   <os>
	I0910 17:51:32.941293   24502 main.go:141] libmachine: (ha-558946-m03)     <type>hvm</type>
	I0910 17:51:32.941298   24502 main.go:141] libmachine: (ha-558946-m03)     <boot dev='cdrom'/>
	I0910 17:51:32.941309   24502 main.go:141] libmachine: (ha-558946-m03)     <boot dev='hd'/>
	I0910 17:51:32.941314   24502 main.go:141] libmachine: (ha-558946-m03)     <bootmenu enable='no'/>
	I0910 17:51:32.941323   24502 main.go:141] libmachine: (ha-558946-m03)   </os>
	I0910 17:51:32.941329   24502 main.go:141] libmachine: (ha-558946-m03)   <devices>
	I0910 17:51:32.941341   24502 main.go:141] libmachine: (ha-558946-m03)     <disk type='file' device='cdrom'>
	I0910 17:51:32.941426   24502 main.go:141] libmachine: (ha-558946-m03)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/boot2docker.iso'/>
	I0910 17:51:32.941455   24502 main.go:141] libmachine: (ha-558946-m03)       <target dev='hdc' bus='scsi'/>
	I0910 17:51:32.941471   24502 main.go:141] libmachine: (ha-558946-m03)       <readonly/>
	I0910 17:51:32.941484   24502 main.go:141] libmachine: (ha-558946-m03)     </disk>
	I0910 17:51:32.941495   24502 main.go:141] libmachine: (ha-558946-m03)     <disk type='file' device='disk'>
	I0910 17:51:32.941506   24502 main.go:141] libmachine: (ha-558946-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0910 17:51:32.941521   24502 main.go:141] libmachine: (ha-558946-m03)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/ha-558946-m03.rawdisk'/>
	I0910 17:51:32.941533   24502 main.go:141] libmachine: (ha-558946-m03)       <target dev='hda' bus='virtio'/>
	I0910 17:51:32.941543   24502 main.go:141] libmachine: (ha-558946-m03)     </disk>
	I0910 17:51:32.941558   24502 main.go:141] libmachine: (ha-558946-m03)     <interface type='network'>
	I0910 17:51:32.941570   24502 main.go:141] libmachine: (ha-558946-m03)       <source network='mk-ha-558946'/>
	I0910 17:51:32.941579   24502 main.go:141] libmachine: (ha-558946-m03)       <model type='virtio'/>
	I0910 17:51:32.941588   24502 main.go:141] libmachine: (ha-558946-m03)     </interface>
	I0910 17:51:32.941597   24502 main.go:141] libmachine: (ha-558946-m03)     <interface type='network'>
	I0910 17:51:32.941611   24502 main.go:141] libmachine: (ha-558946-m03)       <source network='default'/>
	I0910 17:51:32.941619   24502 main.go:141] libmachine: (ha-558946-m03)       <model type='virtio'/>
	I0910 17:51:32.941643   24502 main.go:141] libmachine: (ha-558946-m03)     </interface>
	I0910 17:51:32.941670   24502 main.go:141] libmachine: (ha-558946-m03)     <serial type='pty'>
	I0910 17:51:32.941685   24502 main.go:141] libmachine: (ha-558946-m03)       <target port='0'/>
	I0910 17:51:32.941698   24502 main.go:141] libmachine: (ha-558946-m03)     </serial>
	I0910 17:51:32.941708   24502 main.go:141] libmachine: (ha-558946-m03)     <console type='pty'>
	I0910 17:51:32.941720   24502 main.go:141] libmachine: (ha-558946-m03)       <target type='serial' port='0'/>
	I0910 17:51:32.941731   24502 main.go:141] libmachine: (ha-558946-m03)     </console>
	I0910 17:51:32.941742   24502 main.go:141] libmachine: (ha-558946-m03)     <rng model='virtio'>
	I0910 17:51:32.941754   24502 main.go:141] libmachine: (ha-558946-m03)       <backend model='random'>/dev/random</backend>
	I0910 17:51:32.941763   24502 main.go:141] libmachine: (ha-558946-m03)     </rng>
	I0910 17:51:32.941783   24502 main.go:141] libmachine: (ha-558946-m03)     
	I0910 17:51:32.941800   24502 main.go:141] libmachine: (ha-558946-m03)     
	I0910 17:51:32.941818   24502 main.go:141] libmachine: (ha-558946-m03)   </devices>
	I0910 17:51:32.941889   24502 main.go:141] libmachine: (ha-558946-m03) </domain>
	I0910 17:51:32.941910   24502 main.go:141] libmachine: (ha-558946-m03) 
	I0910 17:51:32.948606   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:df:e2:10 in network default
	I0910 17:51:32.949137   24502 main.go:141] libmachine: (ha-558946-m03) Ensuring networks are active...
	I0910 17:51:32.949162   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:32.949767   24502 main.go:141] libmachine: (ha-558946-m03) Ensuring network default is active
	I0910 17:51:32.950076   24502 main.go:141] libmachine: (ha-558946-m03) Ensuring network mk-ha-558946 is active
	I0910 17:51:32.950451   24502 main.go:141] libmachine: (ha-558946-m03) Getting domain xml...
	I0910 17:51:32.951130   24502 main.go:141] libmachine: (ha-558946-m03) Creating domain...
	I0910 17:51:34.160910   24502 main.go:141] libmachine: (ha-558946-m03) Waiting to get IP...
	I0910 17:51:34.161915   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:34.162337   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:34.162365   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:34.162301   25238 retry.go:31] will retry after 192.308586ms: waiting for machine to come up
	I0910 17:51:34.356851   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:34.357348   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:34.357374   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:34.357310   25238 retry.go:31] will retry after 235.950538ms: waiting for machine to come up
	I0910 17:51:34.594621   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:34.595181   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:34.595210   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:34.595135   25238 retry.go:31] will retry after 319.216711ms: waiting for machine to come up
	I0910 17:51:34.915429   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:34.915849   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:34.915875   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:34.915823   25238 retry.go:31] will retry after 437.191559ms: waiting for machine to come up
	I0910 17:51:35.354134   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:35.354569   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:35.354596   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:35.354518   25238 retry.go:31] will retry after 527.344491ms: waiting for machine to come up
	I0910 17:51:35.883063   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:35.883454   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:35.883478   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:35.883416   25238 retry.go:31] will retry after 887.020425ms: waiting for machine to come up
	I0910 17:51:36.771488   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:36.771891   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:36.771913   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:36.771846   25238 retry.go:31] will retry after 747.567374ms: waiting for machine to come up
	I0910 17:51:37.520868   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:37.521285   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:37.521312   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:37.521233   25238 retry.go:31] will retry after 1.2299808s: waiting for machine to come up
	I0910 17:51:38.752317   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:38.752751   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:38.752770   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:38.752716   25238 retry.go:31] will retry after 1.636100072s: waiting for machine to come up
	I0910 17:51:40.391631   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:40.392063   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:40.392115   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:40.392040   25238 retry.go:31] will retry after 1.90887496s: waiting for machine to come up
	I0910 17:51:42.302712   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:42.303213   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:42.303247   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:42.303157   25238 retry.go:31] will retry after 2.44749237s: waiting for machine to come up
	I0910 17:51:44.751762   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:44.752142   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:44.752166   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:44.752104   25238 retry.go:31] will retry after 3.502593835s: waiting for machine to come up
	I0910 17:51:48.255721   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:48.256171   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:48.256197   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:48.256133   25238 retry.go:31] will retry after 3.604327927s: waiting for machine to come up
	I0910 17:51:51.864806   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:51.865324   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find current IP address of domain ha-558946-m03 in network mk-ha-558946
	I0910 17:51:51.865344   24502 main.go:141] libmachine: (ha-558946-m03) DBG | I0910 17:51:51.865291   25238 retry.go:31] will retry after 4.848421616s: waiting for machine to come up
	I0910 17:51:56.718037   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:56.718456   24502 main.go:141] libmachine: (ha-558946-m03) Found IP for machine: 192.168.39.241
	I0910 17:51:56.718475   24502 main.go:141] libmachine: (ha-558946-m03) Reserving static IP address...
	I0910 17:51:56.718485   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has current primary IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:56.718869   24502 main.go:141] libmachine: (ha-558946-m03) DBG | unable to find host DHCP lease matching {name: "ha-558946-m03", mac: "52:54:00:fd:d7:43", ip: "192.168.39.241"} in network mk-ha-558946
	I0910 17:51:56.788379   24502 main.go:141] libmachine: (ha-558946-m03) Reserved static IP address: 192.168.39.241
	I0910 17:51:56.788403   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Getting to WaitForSSH function...
	I0910 17:51:56.788411   24502 main.go:141] libmachine: (ha-558946-m03) Waiting for SSH to be available...
	I0910 17:51:56.790972   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:56.791496   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:56.791532   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:56.791559   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Using SSH client type: external
	I0910 17:51:56.791591   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa (-rw-------)
	I0910 17:51:56.791642   24502 main.go:141] libmachine: (ha-558946-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 17:51:56.791663   24502 main.go:141] libmachine: (ha-558946-m03) DBG | About to run SSH command:
	I0910 17:51:56.791687   24502 main.go:141] libmachine: (ha-558946-m03) DBG | exit 0
	I0910 17:51:56.921130   24502 main.go:141] libmachine: (ha-558946-m03) DBG | SSH cmd err, output: <nil>: 
	I0910 17:51:56.921382   24502 main.go:141] libmachine: (ha-558946-m03) KVM machine creation complete!
	I0910 17:51:56.921750   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetConfigRaw
	I0910 17:51:56.922281   24502 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:51:56.922458   24502 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:51:56.922628   24502 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0910 17:51:56.922649   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetState
	I0910 17:51:56.923876   24502 main.go:141] libmachine: Detecting operating system of created instance...
	I0910 17:51:56.923893   24502 main.go:141] libmachine: Waiting for SSH to be available...
	I0910 17:51:56.923902   24502 main.go:141] libmachine: Getting to WaitForSSH function...
	I0910 17:51:56.923908   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:56.926213   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:56.926562   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:56.926584   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:56.926721   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:56.926869   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:56.927000   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:56.927111   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:56.927251   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:51:56.927437   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0910 17:51:56.927447   24502 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0910 17:51:57.040456   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:51:57.040479   24502 main.go:141] libmachine: Detecting the provisioner...
	I0910 17:51:57.040486   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:57.042980   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.043358   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:57.043385   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.043538   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:57.043731   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:57.043885   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:57.044034   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:57.044200   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:51:57.044384   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0910 17:51:57.044402   24502 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0910 17:51:57.161681   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0910 17:51:57.161753   24502 main.go:141] libmachine: found compatible host: buildroot
	I0910 17:51:57.161766   24502 main.go:141] libmachine: Provisioning with buildroot...
	I0910 17:51:57.161779   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetMachineName
	I0910 17:51:57.161989   24502 buildroot.go:166] provisioning hostname "ha-558946-m03"
	I0910 17:51:57.162010   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetMachineName
	I0910 17:51:57.162197   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:57.164708   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.165128   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:57.165150   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.165316   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:57.165500   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:57.165653   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:57.165781   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:57.165915   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:51:57.166103   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0910 17:51:57.166116   24502 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-558946-m03 && echo "ha-558946-m03" | sudo tee /etc/hostname
	I0910 17:51:57.292370   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-558946-m03
	
	I0910 17:51:57.292397   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:57.294960   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.295384   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:57.295430   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.295562   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:57.295768   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:57.295939   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:57.296076   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:57.296241   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:51:57.296454   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0910 17:51:57.296481   24502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-558946-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-558946-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-558946-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 17:51:57.422005   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:51:57.422046   24502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 17:51:57.422068   24502 buildroot.go:174] setting up certificates
	I0910 17:51:57.422079   24502 provision.go:84] configureAuth start
	I0910 17:51:57.422089   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetMachineName
	I0910 17:51:57.422379   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetIP
	I0910 17:51:57.424806   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.425146   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:57.425171   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.425367   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:57.427893   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.428271   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:57.428306   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.428471   24502 provision.go:143] copyHostCerts
	I0910 17:51:57.428495   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 17:51:57.428528   24502 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 17:51:57.428541   24502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 17:51:57.428611   24502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 17:51:57.428707   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 17:51:57.428732   24502 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 17:51:57.428741   24502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 17:51:57.428777   24502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 17:51:57.428861   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 17:51:57.428885   24502 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 17:51:57.428895   24502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 17:51:57.428931   24502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 17:51:57.429001   24502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.ha-558946-m03 san=[127.0.0.1 192.168.39.241 ha-558946-m03 localhost minikube]
	I0910 17:51:57.596497   24502 provision.go:177] copyRemoteCerts
	I0910 17:51:57.596547   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 17:51:57.596566   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:57.599135   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.599560   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:57.599583   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.599719   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:57.599894   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:57.600029   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:57.600170   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa Username:docker}
	I0910 17:51:57.687797   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0910 17:51:57.687871   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 17:51:57.712766   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0910 17:51:57.712822   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 17:51:57.737408   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0910 17:51:57.737466   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0910 17:51:57.760783   24502 provision.go:87] duration metric: took 338.691491ms to configureAuth
	I0910 17:51:57.760805   24502 buildroot.go:189] setting minikube options for container-runtime
	I0910 17:51:57.760984   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:51:57.761060   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:57.763601   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.763970   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:57.763992   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:57.764142   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:57.764346   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:57.764486   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:57.764602   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:57.764713   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:51:57.764878   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0910 17:51:57.764898   24502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 17:51:57.999720   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
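	The command above drops a one-line sysconfig file so CRI-O treats the cluster service CIDR (10.96.0.0/12) as an insecure registry, then restarts the crio service; the echoed output confirms the file contents. One way to verify from inside the guest (sketch, not part of the test):

    cat /etc/sysconfig/crio.minikube   # should show CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    sudo systemctl is-active crio      # should print "active" after the restart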
	
	I0910 17:51:57.999745   24502 main.go:141] libmachine: Checking connection to Docker...
	I0910 17:51:57.999762   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetURL
	I0910 17:51:58.000979   24502 main.go:141] libmachine: (ha-558946-m03) DBG | Using libvirt version 6000000
	I0910 17:51:58.003464   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.003888   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:58.003928   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.004119   24502 main.go:141] libmachine: Docker is up and running!
	I0910 17:51:58.004137   24502 main.go:141] libmachine: Reticulating splines...
	I0910 17:51:58.004145   24502 client.go:171] duration metric: took 25.410481159s to LocalClient.Create
	I0910 17:51:58.004172   24502 start.go:167] duration metric: took 25.410545503s to libmachine.API.Create "ha-558946"
	I0910 17:51:58.004190   24502 start.go:293] postStartSetup for "ha-558946-m03" (driver="kvm2")
	I0910 17:51:58.004211   24502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 17:51:58.004234   24502 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:51:58.004511   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 17:51:58.004537   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:58.006411   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.006739   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:58.006768   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.006862   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:58.007035   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:58.007160   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:58.007301   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa Username:docker}
	I0910 17:51:58.095627   24502 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 17:51:58.099727   24502 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 17:51:58.099754   24502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 17:51:58.099810   24502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 17:51:58.099881   24502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 17:51:58.099891   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /etc/ssl/certs/131212.pem
	I0910 17:51:58.099966   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 17:51:58.109618   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 17:51:58.133163   24502 start.go:296] duration metric: took 128.957838ms for postStartSetup
	I0910 17:51:58.133210   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetConfigRaw
	I0910 17:51:58.133785   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetIP
	I0910 17:51:58.136248   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.136611   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:58.136641   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.136851   24502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:51:58.137157   24502 start.go:128] duration metric: took 25.562199754s to createHost
	I0910 17:51:58.137186   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:58.139516   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.139865   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:58.139899   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.140140   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:58.140342   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:58.140525   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:58.140683   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:58.140842   24502 main.go:141] libmachine: Using SSH client type: native
	I0910 17:51:58.140998   24502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0910 17:51:58.141008   24502 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 17:51:58.253804   24502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725990718.231347140
	
	I0910 17:51:58.253823   24502 fix.go:216] guest clock: 1725990718.231347140
	I0910 17:51:58.253834   24502 fix.go:229] Guest: 2024-09-10 17:51:58.23134714 +0000 UTC Remote: 2024-09-10 17:51:58.137174583 +0000 UTC m=+139.083003788 (delta=94.172557ms)
	I0910 17:51:58.253858   24502 fix.go:200] guest clock delta is within tolerance: 94.172557ms
	I0910 17:51:58.253864   24502 start.go:83] releasing machines lock for "ha-558946-m03", held for 25.679053483s
	I0910 17:51:58.253889   24502 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:51:58.254123   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetIP
	I0910 17:51:58.256697   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.257037   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:58.257062   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.259134   24502 out.go:177] * Found network options:
	I0910 17:51:58.260296   24502 out.go:177]   - NO_PROXY=192.168.39.109,192.168.39.96
	W0910 17:51:58.261397   24502 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 17:51:58.261420   24502 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 17:51:58.261431   24502 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:51:58.261924   24502 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:51:58.262083   24502 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:51:58.262168   24502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 17:51:58.262195   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	W0910 17:51:58.262238   24502 proxy.go:119] fail to check proxy env: Error ip not in block
	W0910 17:51:58.262266   24502 proxy.go:119] fail to check proxy env: Error ip not in block
	I0910 17:51:58.262311   24502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 17:51:58.262324   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:51:58.264520   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.264679   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.264904   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:58.264930   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.265007   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:58.265027   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:58.265042   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:58.265219   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:51:58.265248   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:58.265370   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:51:58.265415   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:58.265517   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:51:58.265603   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa Username:docker}
	I0910 17:51:58.265673   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa Username:docker}
	I0910 17:51:58.503190   24502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 17:51:58.509913   24502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 17:51:58.509959   24502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 17:51:58.525557   24502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 17:51:58.525575   24502 start.go:495] detecting cgroup driver to use...
	I0910 17:51:58.525631   24502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 17:51:58.542691   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 17:51:58.555829   24502 docker.go:217] disabling cri-docker service (if available) ...
	I0910 17:51:58.555882   24502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 17:51:58.570042   24502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 17:51:58.583566   24502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 17:51:58.703410   24502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 17:51:58.867516   24502 docker.go:233] disabling docker service ...
	I0910 17:51:58.867584   24502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 17:51:58.882603   24502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 17:51:58.895218   24502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 17:51:59.015401   24502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 17:51:59.134538   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 17:51:59.149032   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 17:51:59.172805   24502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 17:51:59.172856   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:51:59.183466   24502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 17:51:59.183520   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:51:59.194762   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:51:59.205912   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:51:59.216137   24502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 17:51:59.226500   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:51:59.236758   24502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 17:51:59.255980   24502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
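	Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager with conmon placed in the pod cgroup, and open unprivileged ports via default_sysctls. A sketch of the expected net effect on /etc/crio/crio.conf.d/02-crio.conf, reconstructed from the commands rather than captured from the node (section headers and untouched keys omitted):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",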
	I0910 17:51:59.266336   24502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 17:51:59.275439   24502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 17:51:59.275486   24502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 17:51:59.287960   24502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
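	The sysctl probe fails with status 255 only because the br_netfilter module is not loaded yet, which is why it is followed by modprobe br_netfilter and enabling IPv4 forwarding. To confirm the kernel side afterwards (sketch, run inside the guest):

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # exists once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward           # should print 1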
	I0910 17:51:59.297846   24502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:51:59.420894   24502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 17:51:59.510574   24502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 17:51:59.510644   24502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 17:51:59.515408   24502 start.go:563] Will wait 60s for crictl version
	I0910 17:51:59.515462   24502 ssh_runner.go:195] Run: which crictl
	I0910 17:51:59.519396   24502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 17:51:59.558427   24502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 17:51:59.558497   24502 ssh_runner.go:195] Run: crio --version
	I0910 17:51:59.586281   24502 ssh_runner.go:195] Run: crio --version
	I0910 17:51:59.615120   24502 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 17:51:59.616369   24502 out.go:177]   - env NO_PROXY=192.168.39.109
	I0910 17:51:59.617492   24502 out.go:177]   - env NO_PROXY=192.168.39.109,192.168.39.96
	I0910 17:51:59.618574   24502 main.go:141] libmachine: (ha-558946-m03) Calling .GetIP
	I0910 17:51:59.621409   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:59.621788   24502 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:51:59.621810   24502 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:51:59.622001   24502 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 17:51:59.626206   24502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:51:59.638300   24502 mustload.go:65] Loading cluster: ha-558946
	I0910 17:51:59.638504   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:51:59.638790   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:51:59.638832   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:51:59.653007   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34617
	I0910 17:51:59.653354   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:51:59.653761   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:51:59.653780   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:51:59.654069   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:51:59.654228   24502 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:51:59.655581   24502 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:51:59.655844   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:51:59.655877   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:51:59.670783   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35023
	I0910 17:51:59.671114   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:51:59.671551   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:51:59.671573   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:51:59.671839   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:51:59.671995   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:51:59.672143   24502 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946 for IP: 192.168.39.241
	I0910 17:51:59.672156   24502 certs.go:194] generating shared ca certs ...
	I0910 17:51:59.672172   24502 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:51:59.672306   24502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 17:51:59.672362   24502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 17:51:59.672374   24502 certs.go:256] generating profile certs ...
	I0910 17:51:59.672472   24502 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.key
	I0910 17:51:59.672502   24502 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.f8e1ed16
	I0910 17:51:59.672523   24502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.f8e1ed16 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.109 192.168.39.96 192.168.39.241 192.168.39.254]
	I0910 17:51:59.891804   24502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.f8e1ed16 ...
	I0910 17:51:59.891836   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.f8e1ed16: {Name:mkb1c81fb5736388426a997b999622f9986ab5c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:51:59.892015   24502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.f8e1ed16 ...
	I0910 17:51:59.892028   24502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.f8e1ed16: {Name:mk0687690f8f2aa206b5e80a94279c0dd61cb82a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:51:59.892109   24502 certs.go:381] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.f8e1ed16 -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt
	I0910 17:51:59.892264   24502 certs.go:385] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.f8e1ed16 -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key
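	The apiserver certificate is regenerated here because its SAN list must now cover the new node's IP 192.168.39.241 in addition to the existing control-plane IPs and the kube-vip VIP 192.168.39.254 (see the IP list passed to crypto.go above). One way to inspect the SANs on the host, using the profile path from the log (sketch):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt \
      | grep -A1 'Subject Alternative Name'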
	I0910 17:51:59.892398   24502 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key
	I0910 17:51:59.892416   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 17:51:59.892431   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0910 17:51:59.892446   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 17:51:59.892463   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 17:51:59.892478   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 17:51:59.892493   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 17:51:59.892510   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 17:51:59.892524   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 17:51:59.892580   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 17:51:59.892610   24502 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 17:51:59.892620   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 17:51:59.892645   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 17:51:59.892669   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 17:51:59.892694   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 17:51:59.892737   24502 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 17:51:59.892766   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:51:59.892782   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem -> /usr/share/ca-certificates/13121.pem
	I0910 17:51:59.892797   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /usr/share/ca-certificates/131212.pem
	I0910 17:51:59.892831   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:51:59.895532   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:51:59.895879   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:51:59.895910   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:51:59.896038   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:51:59.896247   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:51:59.896385   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:51:59.896537   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:51:59.973391   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0910 17:51:59.979528   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0910 17:51:59.990634   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0910 17:51:59.994678   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0910 17:52:00.005371   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0910 17:52:00.009448   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0910 17:52:00.019328   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0910 17:52:00.023713   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0910 17:52:00.036607   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0910 17:52:00.040895   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0910 17:52:00.050615   24502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0910 17:52:00.054497   24502 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0910 17:52:00.065961   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 17:52:00.090823   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 17:52:00.113838   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 17:52:00.136178   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 17:52:00.160954   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0910 17:52:00.184003   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 17:52:00.208085   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 17:52:00.232608   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 17:52:00.256893   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 17:52:00.281461   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 17:52:00.317710   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 17:52:00.343444   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0910 17:52:00.361375   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0910 17:52:00.378824   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0910 17:52:00.396074   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0910 17:52:00.413679   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0910 17:52:00.430615   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0910 17:52:00.446442   24502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0910 17:52:00.461538   24502 ssh_runner.go:195] Run: openssl version
	I0910 17:52:00.466958   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 17:52:00.476968   24502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 17:52:00.481150   24502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 17:52:00.481199   24502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 17:52:00.486724   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 17:52:00.496907   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 17:52:00.508954   24502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 17:52:00.513405   24502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 17:52:00.513447   24502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 17:52:00.518808   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 17:52:00.529379   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 17:52:00.539926   24502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:52:00.544356   24502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:52:00.544397   24502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:52:00.550049   24502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
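	The openssl x509 -hash / ln -fs pairs above implement OpenSSL's standard CA lookup layout: each trusted PEM gets a symlink named after its subject hash plus a ".0" suffix (b5213941.0 for minikubeCA.pem here). Illustration (sketch, inside the guest):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink to the PEM above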
	I0910 17:52:00.560627   24502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 17:52:00.564698   24502 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 17:52:00.564740   24502 kubeadm.go:934] updating node {m03 192.168.39.241 8443 v1.31.0 crio true true} ...
	I0910 17:52:00.564830   24502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-558946-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
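	The kubelet unit override above is what minikube writes into the systemd drop-in for this node: it pins --hostname-override and --node-ip to ha-558946-m03 / 192.168.39.241 and points the kubelet at the bootstrap kubeconfig. The drop-in lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; a quick check inside the guest (sketch):

    systemctl cat kubelet | grep -- --node-ip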
	I0910 17:52:00.564865   24502 kube-vip.go:115] generating kube-vip config ...
	I0910 17:52:00.564894   24502 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0910 17:52:00.581257   24502 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0910 17:52:00.581314   24502 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
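	The generated manifest above is a static pod: once copied to /etc/kubernetes/manifests/kube-vip.yaml (done a few lines below), the kubelet on this control-plane node runs kube-vip directly. With cp_enable and lb_enable set, the instances elect a leader that holds the virtual IP 192.168.39.254 and load-balance API traffic on port 8443. A hedged check once the node has joined:

    kubectl --context ha-558946 -n kube-system get pods -o wide | grep kube-vip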
	I0910 17:52:00.581377   24502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 17:52:00.591599   24502 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0910 17:52:00.591645   24502 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0910 17:52:00.601750   24502 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0910 17:52:00.601767   24502 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0910 17:52:00.601773   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 17:52:00.601784   24502 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0910 17:52:00.601800   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:52:00.601835   24502 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 17:52:00.601803   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 17:52:00.601913   24502 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 17:52:00.607662   24502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0910 17:52:00.607686   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0910 17:52:00.644344   24502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 17:52:00.644440   24502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0910 17:52:00.644476   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0910 17:52:00.644450   24502 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 17:52:00.682263   24502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0910 17:52:00.682297   24502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
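	Because /var/lib/minikube/binaries/v1.31.0 did not exist on the new machine, kubeadm, kubectl and kubelet are copied in from the host-side cache; the download URLs logged above pair each binary with a .sha256 checksum file. A sketch of an equivalent manual fetch-and-verify for one binary, assuming the .sha256 file contains just the hash (not part of the test):

    curl -LO "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm"
    curl -LO "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256"
    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check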
	I0910 17:52:01.415846   24502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0910 17:52:01.425508   24502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0910 17:52:01.442679   24502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 17:52:01.459730   24502 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0910 17:52:01.476421   24502 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0910 17:52:01.480443   24502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:52:01.492640   24502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:52:01.614523   24502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:52:01.631884   24502 host.go:66] Checking if "ha-558946" exists ...
	I0910 17:52:01.632293   24502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:52:01.632344   24502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:52:01.647603   24502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I0910 17:52:01.648052   24502 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:52:01.648511   24502 main.go:141] libmachine: Using API Version  1
	I0910 17:52:01.648531   24502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:52:01.648890   24502 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:52:01.649067   24502 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:52:01.649248   24502 start.go:317] joinCluster: &{Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:52:01.649397   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0910 17:52:01.649422   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:52:01.652329   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:52:01.652829   24502 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:52:01.652865   24502 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:52:01.652992   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:52:01.653161   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:52:01.653298   24502 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:52:01.653431   24502 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:52:01.808761   24502 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:52:01.808812   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token up6txw.daqk8dai2qrj9189 --discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-558946-m03 --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443"
	I0910 17:52:24.318347   24502 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token up6txw.daqk8dai2qrj9189 --discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-558946-m03 --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443": (22.509504377s)
	I0910 17:52:24.318394   24502 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0910 17:52:25.036729   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-558946-m03 minikube.k8s.io/updated_at=2024_09_10T17_52_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=ha-558946 minikube.k8s.io/primary=false
	I0910 17:52:25.173195   24502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-558946-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0910 17:52:25.320357   24502 start.go:319] duration metric: took 23.67110398s to joinCluster
	I0910 17:52:25.320462   24502 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 17:52:25.320813   24502 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:52:25.321562   24502 out.go:177] * Verifying Kubernetes components...
	I0910 17:52:25.322687   24502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:52:25.604662   24502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:52:25.677954   24502 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:52:25.678279   24502 kapi.go:59] client config for ha-558946: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.crt", KeyFile:"/home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.key", CAFile:"/home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f2c360), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0910 17:52:25.678372   24502 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.109:8443
	I0910 17:52:25.678726   24502 node_ready.go:35] waiting up to 6m0s for node "ha-558946-m03" to be "Ready" ...
	I0910 17:52:25.678834   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:25.678846   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:25.678859   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:25.678866   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:25.683264   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:52:26.179499   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:26.179516   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:26.179523   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:26.179526   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:26.183497   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:26.679503   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:26.679530   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:26.679542   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:26.679547   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:26.683176   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:27.179605   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:27.179630   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:27.179642   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:27.179646   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:27.182902   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:27.679414   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:27.679449   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:27.679460   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:27.679465   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:27.682139   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:27.682838   24502 node_ready.go:53] node "ha-558946-m03" has status "Ready":"False"
	I0910 17:52:28.179114   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:28.179150   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:28.179159   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:28.179163   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:28.182339   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:28.679119   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:28.679141   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:28.679150   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:28.679154   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:28.683394   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:52:29.179684   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:29.179710   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:29.179721   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:29.179726   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:29.183442   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:29.679039   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:29.679059   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:29.679069   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:29.679075   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:29.681958   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:30.179860   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:30.179882   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:30.179891   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:30.179896   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:30.183936   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:52:30.184669   24502 node_ready.go:53] node "ha-558946-m03" has status "Ready":"False"
	I0910 17:52:30.678973   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:30.678995   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:30.679004   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:30.679008   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:30.681651   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:31.179618   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:31.179637   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:31.179645   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:31.179649   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:31.182874   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:31.679712   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:31.679735   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:31.679743   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:31.679747   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:31.682917   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:32.179064   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:32.179083   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:32.179091   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:32.179094   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:32.181772   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:32.679179   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:32.679205   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:32.679216   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:32.679220   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:32.682216   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:32.682910   24502 node_ready.go:53] node "ha-558946-m03" has status "Ready":"False"
	I0910 17:52:33.179832   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:33.179853   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:33.179864   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:33.179870   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:33.183368   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:33.679165   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:33.679186   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:33.679196   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:33.679200   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:33.682365   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:34.179457   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:34.179478   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:34.179486   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:34.179490   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:34.183209   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:34.679139   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:34.679158   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:34.679172   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:34.679180   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:34.682351   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:35.178948   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:35.178982   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:35.178991   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:35.178996   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:35.182175   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:35.182899   24502 node_ready.go:53] node "ha-558946-m03" has status "Ready":"False"
	I0910 17:52:35.679534   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:35.679557   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:35.679568   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:35.679577   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:35.682491   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:36.179774   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:36.179805   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:36.179819   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:36.179825   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:36.183027   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:36.679808   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:36.679830   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:36.679837   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:36.679841   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:36.682433   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:37.179662   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:37.179681   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:37.179690   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:37.179694   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:37.183057   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:37.183575   24502 node_ready.go:53] node "ha-558946-m03" has status "Ready":"False"
	I0910 17:52:37.679434   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:37.679463   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:37.679474   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:37.679482   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:37.683136   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:38.179047   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:38.179074   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:38.179084   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:38.179092   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:38.182641   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:38.679637   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:38.679659   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:38.679668   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:38.679677   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:38.682391   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:39.179642   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:39.179663   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:39.179674   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:39.179681   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:39.182807   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:39.678974   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:39.678994   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:39.679006   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:39.679012   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:39.682452   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:39.683029   24502 node_ready.go:53] node "ha-558946-m03" has status "Ready":"False"
	I0910 17:52:40.179028   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:40.179060   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.179068   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.179072   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.182089   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:40.679084   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:40.679109   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.679121   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.679127   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.682558   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:40.683011   24502 node_ready.go:49] node "ha-558946-m03" has status "Ready":"True"
	I0910 17:52:40.683025   24502 node_ready.go:38] duration metric: took 15.004282888s for node "ha-558946-m03" to be "Ready" ...
	I0910 17:52:40.683033   24502 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:52:40.683084   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:52:40.683093   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.683100   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.683103   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.688627   24502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 17:52:40.695199   24502 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-5pv7s" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:40.695270   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-5pv7s
	I0910 17:52:40.695278   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.695285   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.695290   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.698284   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:40.698929   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:40.698945   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.698955   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.698959   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.701757   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:40.702238   24502 pod_ready.go:93] pod "coredns-6f6b679f8f-5pv7s" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:40.702262   24502 pod_ready.go:82] duration metric: took 7.044635ms for pod "coredns-6f6b679f8f-5pv7s" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:40.702272   24502 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fmcmc" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:40.702329   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-fmcmc
	I0910 17:52:40.702339   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.702350   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.702357   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.704642   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:40.705371   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:40.705389   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.705398   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.705403   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.708139   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:40.709730   24502 pod_ready.go:93] pod "coredns-6f6b679f8f-fmcmc" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:40.709746   24502 pod_ready.go:82] duration metric: took 7.467139ms for pod "coredns-6f6b679f8f-fmcmc" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:40.709754   24502 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:40.709794   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-ha-558946
	I0910 17:52:40.709802   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.709811   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.709817   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.711887   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:40.712429   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:40.712443   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.712450   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.712455   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.714656   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:40.715226   24502 pod_ready.go:93] pod "etcd-ha-558946" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:40.715243   24502 pod_ready.go:82] duration metric: took 5.48298ms for pod "etcd-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:40.715253   24502 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:40.715309   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-ha-558946-m02
	I0910 17:52:40.715320   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.715329   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.715338   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.718089   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:40.718540   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:52:40.718553   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.718560   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.718563   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.720665   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:40.721039   24502 pod_ready.go:93] pod "etcd-ha-558946-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:40.721052   24502 pod_ready.go:82] duration metric: took 5.792309ms for pod "etcd-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:40.721062   24502 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-558946-m03" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:40.879457   24502 request.go:632] Waited for 158.329186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-ha-558946-m03
	I0910 17:52:40.879530   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-ha-558946-m03
	I0910 17:52:40.879536   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:40.879544   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:40.879548   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:40.883322   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:41.079300   24502 request.go:632] Waited for 195.201107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:41.079364   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:41.079373   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:41.079382   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:41.079390   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:41.082201   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:41.082797   24502 pod_ready.go:93] pod "etcd-ha-558946-m03" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:41.082817   24502 pod_ready.go:82] duration metric: took 361.747825ms for pod "etcd-ha-558946-m03" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:41.082832   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:41.279076   24502 request.go:632] Waited for 196.180193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946
	I0910 17:52:41.279155   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946
	I0910 17:52:41.279160   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:41.279168   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:41.279172   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:41.282454   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:41.479328   24502 request.go:632] Waited for 196.33062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:41.479394   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:41.479401   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:41.479408   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:41.479415   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:41.482038   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:41.482626   24502 pod_ready.go:93] pod "kube-apiserver-ha-558946" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:41.482643   24502 pod_ready.go:82] duration metric: took 399.802605ms for pod "kube-apiserver-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:41.482656   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:41.679268   24502 request.go:632] Waited for 196.544015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946-m02
	I0910 17:52:41.679341   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946-m02
	I0910 17:52:41.679349   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:41.679359   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:41.679364   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:41.682512   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:41.879708   24502 request.go:632] Waited for 196.352723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:52:41.879758   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:52:41.879763   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:41.879769   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:41.879778   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:41.884152   24502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0910 17:52:41.884799   24502 pod_ready.go:93] pod "kube-apiserver-ha-558946-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:41.884816   24502 pod_ready.go:82] duration metric: took 402.153066ms for pod "kube-apiserver-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:41.884826   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-558946-m03" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:42.079984   24502 request.go:632] Waited for 195.073226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946-m03
	I0910 17:52:42.080046   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-558946-m03
	I0910 17:52:42.080053   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:42.080064   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:42.080074   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:42.083799   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:42.279999   24502 request.go:632] Waited for 195.304421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:42.280051   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:42.280058   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:42.280075   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:42.280097   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:42.283357   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:42.283965   24502 pod_ready.go:93] pod "kube-apiserver-ha-558946-m03" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:42.283988   24502 pod_ready.go:82] duration metric: took 399.149137ms for pod "kube-apiserver-ha-558946-m03" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:42.284004   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:42.480060   24502 request.go:632] Waited for 195.968031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946
	I0910 17:52:42.480174   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946
	I0910 17:52:42.480200   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:42.480214   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:42.480223   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:42.483063   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:42.680049   24502 request.go:632] Waited for 196.316999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:42.680132   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:42.680140   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:42.680149   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:42.680158   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:42.683053   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:42.683683   24502 pod_ready.go:93] pod "kube-controller-manager-ha-558946" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:42.683699   24502 pod_ready.go:82] duration metric: took 399.684285ms for pod "kube-controller-manager-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:42.683708   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:42.879761   24502 request.go:632] Waited for 195.98885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946-m02
	I0910 17:52:42.879824   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946-m02
	I0910 17:52:42.879832   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:42.879843   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:42.879850   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:42.882761   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:43.079873   24502 request.go:632] Waited for 196.353903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:52:43.079928   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:52:43.079933   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:43.079940   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:43.079944   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:43.083556   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:43.084101   24502 pod_ready.go:93] pod "kube-controller-manager-ha-558946-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:43.084123   24502 pod_ready.go:82] duration metric: took 400.407652ms for pod "kube-controller-manager-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:43.084137   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-558946-m03" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:43.279096   24502 request.go:632] Waited for 194.891277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946-m03
	I0910 17:52:43.279156   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-558946-m03
	I0910 17:52:43.279162   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:43.279172   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:43.279179   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:43.282580   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:43.480049   24502 request.go:632] Waited for 196.363721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:43.480172   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:43.480181   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:43.480201   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:43.480209   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:43.483483   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:43.484019   24502 pod_ready.go:93] pod "kube-controller-manager-ha-558946-m03" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:43.484040   24502 pod_ready.go:82] duration metric: took 399.893727ms for pod "kube-controller-manager-ha-558946-m03" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:43.484054   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8ldlx" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:43.680052   24502 request.go:632] Waited for 195.928284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8ldlx
	I0910 17:52:43.680147   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8ldlx
	I0910 17:52:43.680158   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:43.680169   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:43.680180   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:43.683455   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:43.879703   24502 request.go:632] Waited for 195.367895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:43.879753   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:43.879759   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:43.879769   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:43.879776   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:43.883182   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:43.883787   24502 pod_ready.go:93] pod "kube-proxy-8ldlx" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:43.883808   24502 pod_ready.go:82] duration metric: took 399.744881ms for pod "kube-proxy-8ldlx" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:43.883822   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gjqzx" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:44.079927   24502 request.go:632] Waited for 196.04605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjqzx
	I0910 17:52:44.079986   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjqzx
	I0910 17:52:44.079993   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:44.080006   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:44.080014   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:44.083263   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:44.279541   24502 request.go:632] Waited for 195.588211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:44.279608   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:44.279613   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:44.279621   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:44.279627   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:44.283206   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:44.284080   24502 pod_ready.go:93] pod "kube-proxy-gjqzx" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:44.284100   24502 pod_ready.go:82] duration metric: took 400.270829ms for pod "kube-proxy-gjqzx" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:44.284110   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xggtm" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:44.479085   24502 request.go:632] Waited for 194.915942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xggtm
	I0910 17:52:44.479149   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xggtm
	I0910 17:52:44.479154   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:44.479161   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:44.479168   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:44.483057   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:44.679214   24502 request.go:632] Waited for 195.228306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:52:44.679274   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:52:44.679281   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:44.679290   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:44.679305   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:44.687270   24502 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 17:52:44.688060   24502 pod_ready.go:93] pod "kube-proxy-xggtm" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:44.688076   24502 pod_ready.go:82] duration metric: took 403.961027ms for pod "kube-proxy-xggtm" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:44.688085   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:44.880028   24502 request.go:632] Waited for 191.881814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946
	I0910 17:52:44.880103   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946
	I0910 17:52:44.880109   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:44.880117   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:44.880121   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:44.883793   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:45.080077   24502 request.go:632] Waited for 195.339736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:45.080123   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946
	I0910 17:52:45.080127   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:45.080134   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:45.080138   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:45.083879   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:45.084486   24502 pod_ready.go:93] pod "kube-scheduler-ha-558946" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:45.084502   24502 pod_ready.go:82] duration metric: took 396.410407ms for pod "kube-scheduler-ha-558946" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:45.084512   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:45.279567   24502 request.go:632] Waited for 194.994058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946-m02
	I0910 17:52:45.279641   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946-m02
	I0910 17:52:45.279651   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:45.279658   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:45.279665   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:45.282904   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:45.479741   24502 request.go:632] Waited for 196.217693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:52:45.479821   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m02
	I0910 17:52:45.479831   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:45.479842   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:45.479848   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:45.483127   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:45.483766   24502 pod_ready.go:93] pod "kube-scheduler-ha-558946-m02" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:45.483787   24502 pod_ready.go:82] duration metric: took 399.26798ms for pod "kube-scheduler-ha-558946-m02" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:45.483800   24502 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-558946-m03" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:45.679766   24502 request.go:632] Waited for 195.896259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946-m03
	I0910 17:52:45.679837   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-558946-m03
	I0910 17:52:45.679848   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:45.679859   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:45.679869   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:45.682853   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:45.879886   24502 request.go:632] Waited for 196.363607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:45.879966   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/ha-558946-m03
	I0910 17:52:45.879974   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:45.879982   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:45.879988   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:45.883181   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:45.883825   24502 pod_ready.go:93] pod "kube-scheduler-ha-558946-m03" in "kube-system" namespace has status "Ready":"True"
	I0910 17:52:45.883841   24502 pod_ready.go:82] duration metric: took 400.030658ms for pod "kube-scheduler-ha-558946-m03" in "kube-system" namespace to be "Ready" ...
	I0910 17:52:45.883851   24502 pod_ready.go:39] duration metric: took 5.20080914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:52:45.883866   24502 api_server.go:52] waiting for apiserver process to appear ...
	I0910 17:52:45.883921   24502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:52:45.900126   24502 api_server.go:72] duration metric: took 20.579632142s to wait for apiserver process to appear ...
	I0910 17:52:45.900147   24502 api_server.go:88] waiting for apiserver healthz status ...
	I0910 17:52:45.900170   24502 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I0910 17:52:45.904231   24502 api_server.go:279] https://192.168.39.109:8443/healthz returned 200:
	ok
	I0910 17:52:45.904284   24502 round_trippers.go:463] GET https://192.168.39.109:8443/version
	I0910 17:52:45.904289   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:45.904295   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:45.904302   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:45.905085   24502 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0910 17:52:45.905134   24502 api_server.go:141] control plane version: v1.31.0
	I0910 17:52:45.905147   24502 api_server.go:131] duration metric: took 4.993418ms to wait for apiserver health ...
	I0910 17:52:45.905153   24502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 17:52:46.079501   24502 request.go:632] Waited for 174.288817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:52:46.079566   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:52:46.079572   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:46.079581   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:46.079588   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:46.085235   24502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0910 17:52:46.091584   24502 system_pods.go:59] 24 kube-system pods found
	I0910 17:52:46.091608   24502 system_pods.go:61] "coredns-6f6b679f8f-5pv7s" [e75ceddc-7576-45f6-8b80-2071bc7fbef8] Running
	I0910 17:52:46.091613   24502 system_pods.go:61] "coredns-6f6b679f8f-fmcmc" [0d79d296-3ee7-4b7b-8869-e45465da70ff] Running
	I0910 17:52:46.091617   24502 system_pods.go:61] "etcd-ha-558946" [d99a9237-7866-40f1-95d6-c6488183479e] Running
	I0910 17:52:46.091621   24502 system_pods.go:61] "etcd-ha-558946-m02" [d22427c5-1548-4bd2-b1c1-5a6a4353077a] Running
	I0910 17:52:46.091625   24502 system_pods.go:61] "etcd-ha-558946-m03" [6d01b402-952c-428d-be87-e461cc07de36] Running
	I0910 17:52:46.091629   24502 system_pods.go:61] "kindnet-mshf2" [cec27b40-9e1f-4c27-9d18-422e75dbc252] Running
	I0910 17:52:46.091635   24502 system_pods.go:61] "kindnet-n8n67" [019cf933-bf89-485d-a837-bf8bbedbc0df] Running
	I0910 17:52:46.091639   24502 system_pods.go:61] "kindnet-sfr7m" [31ccb06a-6f76-4a18-894c-707993f766e5] Running
	I0910 17:52:46.091643   24502 system_pods.go:61] "kube-apiserver-ha-558946" [74003dbd-903b-48de-b85f-973654d0d58e] Running
	I0910 17:52:46.091647   24502 system_pods.go:61] "kube-apiserver-ha-558946-m02" [9136cd3a-a68e-4167-808d-61b33978cf45] Running
	I0910 17:52:46.091650   24502 system_pods.go:61] "kube-apiserver-ha-558946-m03" [ee0b10ae-52c5-4bb9-8eb2-b9921279eab7] Running
	I0910 17:52:46.091654   24502 system_pods.go:61] "kube-controller-manager-ha-558946" [82453b26-31b3-4c6e-8e37-26eb141923fc] Running
	I0910 17:52:46.091659   24502 system_pods.go:61] "kube-controller-manager-ha-558946-m02" [d658071a-4335-4933-88c8-4d2cfccb40df] Running
	I0910 17:52:46.091663   24502 system_pods.go:61] "kube-controller-manager-ha-558946-m03" [935f6235-0c9e-4204-b1ca-c75b2e0946b8] Running
	I0910 17:52:46.091668   24502 system_pods.go:61] "kube-proxy-8ldlx" [a5c5acdd-77fe-432b-80a1-34fd11389f6e] Running
	I0910 17:52:46.091671   24502 system_pods.go:61] "kube-proxy-gjqzx" [35a3fe57-a2d6-4134-8205-ce5c8d09b707] Running
	I0910 17:52:46.091675   24502 system_pods.go:61] "kube-proxy-xggtm" [347371e4-83b7-474c-8924-d33c479d736a] Running
	I0910 17:52:46.091678   24502 system_pods.go:61] "kube-scheduler-ha-558946" [e99973ac-5718-4769-99e3-282c3c25b8f8] Running
	I0910 17:52:46.091684   24502 system_pods.go:61] "kube-scheduler-ha-558946-m02" [6c57c232-f86e-417c-b3a6-867b3ed443bf] Running
	I0910 17:52:46.091686   24502 system_pods.go:61] "kube-scheduler-ha-558946-m03" [60a36ce7-25b1-4800-86cc-bab6e5516d91] Running
	I0910 17:52:46.091692   24502 system_pods.go:61] "kube-vip-ha-558946" [810f85ef-6900-456e-877e-095d38286613] Running
	I0910 17:52:46.091695   24502 system_pods.go:61] "kube-vip-ha-558946-m02" [59850a02-4ce3-47dc-a250-f18c0fd9533c] Running
	I0910 17:52:46.091700   24502 system_pods.go:61] "kube-vip-ha-558946-m03" [f77d0e8b-731a-4bcb-b175-08686fe82852] Running
	I0910 17:52:46.091703   24502 system_pods.go:61] "storage-provisioner" [baf5cd7e-5266-4d55-bd6c-459257baa463] Running
	I0910 17:52:46.091709   24502 system_pods.go:74] duration metric: took 186.550993ms to wait for pod list to return data ...
	I0910 17:52:46.091718   24502 default_sa.go:34] waiting for default service account to be created ...
	I0910 17:52:46.279119   24502 request.go:632] Waited for 187.318054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/default/serviceaccounts
	I0910 17:52:46.279187   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/default/serviceaccounts
	I0910 17:52:46.279202   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:46.279215   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:46.279226   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:46.282981   24502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0910 17:52:46.283105   24502 default_sa.go:45] found service account: "default"
	I0910 17:52:46.283119   24502 default_sa.go:55] duration metric: took 191.39626ms for default service account to be created ...
	I0910 17:52:46.283129   24502 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 17:52:46.479679   24502 request.go:632] Waited for 196.462097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:52:46.479732   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0910 17:52:46.479737   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:46.479744   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:46.479748   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:46.487264   24502 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0910 17:52:46.494671   24502 system_pods.go:86] 24 kube-system pods found
	I0910 17:52:46.494696   24502 system_pods.go:89] "coredns-6f6b679f8f-5pv7s" [e75ceddc-7576-45f6-8b80-2071bc7fbef8] Running
	I0910 17:52:46.494703   24502 system_pods.go:89] "coredns-6f6b679f8f-fmcmc" [0d79d296-3ee7-4b7b-8869-e45465da70ff] Running
	I0910 17:52:46.494707   24502 system_pods.go:89] "etcd-ha-558946" [d99a9237-7866-40f1-95d6-c6488183479e] Running
	I0910 17:52:46.494711   24502 system_pods.go:89] "etcd-ha-558946-m02" [d22427c5-1548-4bd2-b1c1-5a6a4353077a] Running
	I0910 17:52:46.494714   24502 system_pods.go:89] "etcd-ha-558946-m03" [6d01b402-952c-428d-be87-e461cc07de36] Running
	I0910 17:52:46.494718   24502 system_pods.go:89] "kindnet-mshf2" [cec27b40-9e1f-4c27-9d18-422e75dbc252] Running
	I0910 17:52:46.494721   24502 system_pods.go:89] "kindnet-n8n67" [019cf933-bf89-485d-a837-bf8bbedbc0df] Running
	I0910 17:52:46.494725   24502 system_pods.go:89] "kindnet-sfr7m" [31ccb06a-6f76-4a18-894c-707993f766e5] Running
	I0910 17:52:46.494728   24502 system_pods.go:89] "kube-apiserver-ha-558946" [74003dbd-903b-48de-b85f-973654d0d58e] Running
	I0910 17:52:46.494731   24502 system_pods.go:89] "kube-apiserver-ha-558946-m02" [9136cd3a-a68e-4167-808d-61b33978cf45] Running
	I0910 17:52:46.494735   24502 system_pods.go:89] "kube-apiserver-ha-558946-m03" [ee0b10ae-52c5-4bb9-8eb2-b9921279eab7] Running
	I0910 17:52:46.494739   24502 system_pods.go:89] "kube-controller-manager-ha-558946" [82453b26-31b3-4c6e-8e37-26eb141923fc] Running
	I0910 17:52:46.494743   24502 system_pods.go:89] "kube-controller-manager-ha-558946-m02" [d658071a-4335-4933-88c8-4d2cfccb40df] Running
	I0910 17:52:46.494745   24502 system_pods.go:89] "kube-controller-manager-ha-558946-m03" [935f6235-0c9e-4204-b1ca-c75b2e0946b8] Running
	I0910 17:52:46.494748   24502 system_pods.go:89] "kube-proxy-8ldlx" [a5c5acdd-77fe-432b-80a1-34fd11389f6e] Running
	I0910 17:52:46.494751   24502 system_pods.go:89] "kube-proxy-gjqzx" [35a3fe57-a2d6-4134-8205-ce5c8d09b707] Running
	I0910 17:52:46.494755   24502 system_pods.go:89] "kube-proxy-xggtm" [347371e4-83b7-474c-8924-d33c479d736a] Running
	I0910 17:52:46.494761   24502 system_pods.go:89] "kube-scheduler-ha-558946" [e99973ac-5718-4769-99e3-282c3c25b8f8] Running
	I0910 17:52:46.494764   24502 system_pods.go:89] "kube-scheduler-ha-558946-m02" [6c57c232-f86e-417c-b3a6-867b3ed443bf] Running
	I0910 17:52:46.494770   24502 system_pods.go:89] "kube-scheduler-ha-558946-m03" [60a36ce7-25b1-4800-86cc-bab6e5516d91] Running
	I0910 17:52:46.494773   24502 system_pods.go:89] "kube-vip-ha-558946" [810f85ef-6900-456e-877e-095d38286613] Running
	I0910 17:52:46.494776   24502 system_pods.go:89] "kube-vip-ha-558946-m02" [59850a02-4ce3-47dc-a250-f18c0fd9533c] Running
	I0910 17:52:46.494779   24502 system_pods.go:89] "kube-vip-ha-558946-m03" [f77d0e8b-731a-4bcb-b175-08686fe82852] Running
	I0910 17:52:46.494782   24502 system_pods.go:89] "storage-provisioner" [baf5cd7e-5266-4d55-bd6c-459257baa463] Running
	I0910 17:52:46.494790   24502 system_pods.go:126] duration metric: took 211.653589ms to wait for k8s-apps to be running ...
	I0910 17:52:46.494797   24502 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 17:52:46.494836   24502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:52:46.510455   24502 system_svc.go:56] duration metric: took 15.650736ms WaitForService to wait for kubelet
	I0910 17:52:46.510482   24502 kubeadm.go:582] duration metric: took 21.189989541s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:52:46.510501   24502 node_conditions.go:102] verifying NodePressure condition ...
	I0910 17:52:46.680122   24502 request.go:632] Waited for 169.552712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes
	I0910 17:52:46.680186   24502 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes
	I0910 17:52:46.680194   24502 round_trippers.go:469] Request Headers:
	I0910 17:52:46.680205   24502 round_trippers.go:473]     Accept: application/json, */*
	I0910 17:52:46.680215   24502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0910 17:52:46.683113   24502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0910 17:52:46.684305   24502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 17:52:46.684326   24502 node_conditions.go:123] node cpu capacity is 2
	I0910 17:52:46.684341   24502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 17:52:46.684346   24502 node_conditions.go:123] node cpu capacity is 2
	I0910 17:52:46.684352   24502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 17:52:46.684356   24502 node_conditions.go:123] node cpu capacity is 2
	I0910 17:52:46.684360   24502 node_conditions.go:105] duration metric: took 173.854209ms to run NodePressure ...
	I0910 17:52:46.684369   24502 start.go:241] waiting for startup goroutines ...
	I0910 17:52:46.684390   24502 start.go:255] writing updated cluster config ...
	I0910 17:52:46.684700   24502 ssh_runner.go:195] Run: rm -f paused
	I0910 17:52:46.734959   24502 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 17:52:46.737419   24502 out.go:177] * Done! kubectl is now configured to use "ha-558946" cluster and "default" namespace by default
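The wait loop above checks, in order, the kube-system pod list, the default service account, the running k8s-apps, the kubelet service on the node, and the NodePressure capacity figures before declaring the cluster ready. A minimal sketch of re-running those checks by hand against this profile (assuming kubectl and minikube are on PATH and the "ha-558946" profile is still up; these commands are illustrative and not part of the captured log):

	kubectl --context ha-558946 get pods -n kube-system
	kubectl --context ha-558946 get serviceaccount default -n default
	kubectl --context ha-558946 get nodes -o wide
	minikube -p ha-558946 ssh -- sudo systemctl is-active kubelet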
	
	
	==> CRI-O <==
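The entries below are CRI-O's debug-level log of the CRI RPCs it served (Version, ImageFsInfo, ListContainers, ListPodSandbox). The same runtime state can be queried directly on the node with crictl; a minimal sketch, assuming crictl on the node is configured against CRI-O's socket (illustrative commands, not part of the captured log):

	minikube -p ha-558946 ssh -- sudo crictl version
	minikube -p ha-558946 ssh -- sudo crictl imagefsinfo
	minikube -p ha-558946 ssh -- sudo crictl ps -a
	minikube -p ha-558946 ssh -- sudo crictl pods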
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.205423596Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b538c84e-a20c-481a-bad3-d8ea302040cf name=/runtime.v1.RuntimeService/Version
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.206940100Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6d44931-b566-44b3-8418-1bae53d027a9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.207737511Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991034207708489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6d44931-b566-44b3-8418-1bae53d027a9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.208363330Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2cd2502d-113a-48cc-a1cf-4e1a744f768a name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.208441635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2cd2502d-113a-48cc-a1cf-4e1a744f768a name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.208772794Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f35f5f9c0297fd9ab025fe3115dc06cad0defe8c7d8c46b7d3aeebb921e8d37,PodSandboxId:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725990770310688052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557,PodSandboxId:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725990640053624379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8,PodSandboxId:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725990639993301041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17eb3a40b6abab09b8551b0deec9412b8856647710a0560601915c527b3992a4,PodSandboxId:d537c4783b42f41ae617b03e4040fca8199d546278cc85c863053c57d550a62a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1725990639919153851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d,PodSandboxId:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725990628186458960,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c,PodSandboxId:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172599062
5854752018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284b2d71723b7871cbf3305fb262bc61d05babb848d5f60cc805c17f9bd0a04e,PodSandboxId:495ea13704d281433654df37f08debf37fb57d23ea819e8921d38da1638c28b9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172599061600
2583127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2abe11d64857285f0708440a498977,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc,PodSandboxId:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725990614321988798,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa,PodSandboxId:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725990614273270200,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97a13adca4b5dce2fca4dd9c379c35eb00fbc1282dbac89320ee7500e30af5d,PodSandboxId:6db7b892990fcb24ccee38ed9453630501a203d6327636187a42e68db6103419,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725990614301936106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4056c90198fe8b04ee4ae9c0719d9142563fc0c3951c133cd4f874b1f144a509,PodSandboxId:56c5eaaefd9dc6c292acb872e35cd14ae77a0efc534ef81a62500216f906e161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725990614292937014,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2cd2502d-113a-48cc-a1cf-4e1a744f768a name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.252035879Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c39454c-a970-4a15-a6d8-993ccbadfe54 name=/runtime.v1.RuntimeService/Version
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.252219395Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c39454c-a970-4a15-a6d8-993ccbadfe54 name=/runtime.v1.RuntimeService/Version
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.253811146Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cfe490b9-20ed-4540-ba23-c314cff93161 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.254500362Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991034254469597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cfe490b9-20ed-4540-ba23-c314cff93161 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.255330160Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2aeadf31-dffe-4b2d-96fa-71fd4ec09052 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.255409236Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2aeadf31-dffe-4b2d-96fa-71fd4ec09052 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.255689278Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f35f5f9c0297fd9ab025fe3115dc06cad0defe8c7d8c46b7d3aeebb921e8d37,PodSandboxId:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725990770310688052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557,PodSandboxId:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725990640053624379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8,PodSandboxId:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725990639993301041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17eb3a40b6abab09b8551b0deec9412b8856647710a0560601915c527b3992a4,PodSandboxId:d537c4783b42f41ae617b03e4040fca8199d546278cc85c863053c57d550a62a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1725990639919153851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d,PodSandboxId:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725990628186458960,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c,PodSandboxId:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172599062
5854752018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284b2d71723b7871cbf3305fb262bc61d05babb848d5f60cc805c17f9bd0a04e,PodSandboxId:495ea13704d281433654df37f08debf37fb57d23ea819e8921d38da1638c28b9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172599061600
2583127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2abe11d64857285f0708440a498977,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc,PodSandboxId:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725990614321988798,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa,PodSandboxId:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725990614273270200,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97a13adca4b5dce2fca4dd9c379c35eb00fbc1282dbac89320ee7500e30af5d,PodSandboxId:6db7b892990fcb24ccee38ed9453630501a203d6327636187a42e68db6103419,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725990614301936106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4056c90198fe8b04ee4ae9c0719d9142563fc0c3951c133cd4f874b1f144a509,PodSandboxId:56c5eaaefd9dc6c292acb872e35cd14ae77a0efc534ef81a62500216f906e161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725990614292937014,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2aeadf31-dffe-4b2d-96fa-71fd4ec09052 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.279977923Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c390766e-52d5-4a1a-baaa-779e1dbcc7ff name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.280426558Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-2t4ms,Uid:7344679f-13fd-466b-ad26-a77a20b9386a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725990768239865651,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:52:47.624646475Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d537c4783b42f41ae617b03e4040fca8199d546278cc85c863053c57d550a62a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:baf5cd7e-5266-4d55-bd6c-459257baa463,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1725990639756981493,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-10T17:50:39.436412919Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-fmcmc,Uid:0d79d296-3ee7-4b7b-8869-e45465da70ff,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725990639747190194,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d79d296-3ee7-4b7b-8869-e45465da70ff,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:50:39.437934920Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-5pv7s,Uid:e75ceddc-7576-45f6-8b80-2071bc7fbef8,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1725990639736852026,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:50:39.427210674Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&PodSandboxMetadata{Name:kube-proxy-gjqzx,Uid:35a3fe57-a2d6-4134-8205-ce5c8d09b707,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725990625512332240,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-09-10T17:50:25.198679385Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&PodSandboxMetadata{Name:kindnet-n8n67,Uid:019cf933-bf89-485d-a837-bf8bbedbc0df,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725990625496466909,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T17:50:25.180886140Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:495ea13704d281433654df37f08debf37fb57d23ea819e8921d38da1638c28b9,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-558946,Uid:1b2abe11d64857285f0708440a498977,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1725990614095023226,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2abe11d64857285f0708440a498977,},Annotations:map[string]string{kubernetes.io/config.hash: 1b2abe11d64857285f0708440a498977,kubernetes.io/config.seen: 2024-09-10T17:50:13.412970035Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6db7b892990fcb24ccee38ed9453630501a203d6327636187a42e68db6103419,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-558946,Uid:5a3bcac99226bc257a0bbe4358f2cf25,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725990614093651232,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apis
erver.advertise-address.endpoint: 192.168.39.109:8443,kubernetes.io/config.hash: 5a3bcac99226bc257a0bbe4358f2cf25,kubernetes.io/config.seen: 2024-09-10T17:50:13.412967348Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-558946,Uid:adbd273a78c889b66df701581a530b4b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725990614090281764,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: adbd273a78c889b66df701581a530b4b,kubernetes.io/config.seen: 2024-09-10T17:50:13.412969313Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Met
adata:&PodSandboxMetadata{Name:etcd-ha-558946,Uid:066fe90d6e5504c167c416bab3c626a5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725990614066672529,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.109:2379,kubernetes.io/config.hash: 066fe90d6e5504c167c416bab3c626a5,kubernetes.io/config.seen: 2024-09-10T17:50:13.412964198Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:56c5eaaefd9dc6c292acb872e35cd14ae77a0efc534ef81a62500216f906e161,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-558946,Uid:f4cb243a9afd92bb7fd74751dcfef866,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725990614065842327,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f4cb243a9afd92bb7fd74751dcfef866,kubernetes.io/config.seen: 2024-09-10T17:50:13.412968416Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c390766e-52d5-4a1a-baaa-779e1dbcc7ff name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.281484008Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72cc1765-7679-4ce7-be97-a4263a9b63d8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.281577553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72cc1765-7679-4ce7-be97-a4263a9b63d8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.281917212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f35f5f9c0297fd9ab025fe3115dc06cad0defe8c7d8c46b7d3aeebb921e8d37,PodSandboxId:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725990770310688052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557,PodSandboxId:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725990640053624379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8,PodSandboxId:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725990639993301041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17eb3a40b6abab09b8551b0deec9412b8856647710a0560601915c527b3992a4,PodSandboxId:d537c4783b42f41ae617b03e4040fca8199d546278cc85c863053c57d550a62a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1725990639919153851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d,PodSandboxId:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725990628186458960,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c,PodSandboxId:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172599062
5854752018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284b2d71723b7871cbf3305fb262bc61d05babb848d5f60cc805c17f9bd0a04e,PodSandboxId:495ea13704d281433654df37f08debf37fb57d23ea819e8921d38da1638c28b9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172599061600
2583127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2abe11d64857285f0708440a498977,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc,PodSandboxId:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725990614321988798,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa,PodSandboxId:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725990614273270200,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97a13adca4b5dce2fca4dd9c379c35eb00fbc1282dbac89320ee7500e30af5d,PodSandboxId:6db7b892990fcb24ccee38ed9453630501a203d6327636187a42e68db6103419,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725990614301936106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4056c90198fe8b04ee4ae9c0719d9142563fc0c3951c133cd4f874b1f144a509,PodSandboxId:56c5eaaefd9dc6c292acb872e35cd14ae77a0efc534ef81a62500216f906e161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725990614292937014,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72cc1765-7679-4ce7-be97-a4263a9b63d8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.301013523Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ba95c0d-ad0f-4d01-bfed-bdc2345ce221 name=/runtime.v1.RuntimeService/Version
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.301145685Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ba95c0d-ad0f-4d01-bfed-bdc2345ce221 name=/runtime.v1.RuntimeService/Version
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.302031214Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=46a8b82a-cf46-443d-9391-336c7f296429 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.302612995Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991034302592196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=46a8b82a-cf46-443d-9391-336c7f296429 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.303138458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1e98fee-9045-41bc-b7b7-61126282e434 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.303210167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1e98fee-9045-41bc-b7b7-61126282e434 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 17:57:14 ha-558946 crio[668]: time="2024-09-10 17:57:14.303448024Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f35f5f9c0297fd9ab025fe3115dc06cad0defe8c7d8c46b7d3aeebb921e8d37,PodSandboxId:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725990770310688052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557,PodSandboxId:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725990640053624379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8,PodSandboxId:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725990639993301041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17eb3a40b6abab09b8551b0deec9412b8856647710a0560601915c527b3992a4,PodSandboxId:d537c4783b42f41ae617b03e4040fca8199d546278cc85c863053c57d550a62a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1725990639919153851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d,PodSandboxId:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725990628186458960,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c,PodSandboxId:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172599062
5854752018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284b2d71723b7871cbf3305fb262bc61d05babb848d5f60cc805c17f9bd0a04e,PodSandboxId:495ea13704d281433654df37f08debf37fb57d23ea819e8921d38da1638c28b9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172599061600
2583127,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2abe11d64857285f0708440a498977,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc,PodSandboxId:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725990614321988798,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa,PodSandboxId:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725990614273270200,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97a13adca4b5dce2fca4dd9c379c35eb00fbc1282dbac89320ee7500e30af5d,PodSandboxId:6db7b892990fcb24ccee38ed9453630501a203d6327636187a42e68db6103419,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725990614301936106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4056c90198fe8b04ee4ae9c0719d9142563fc0c3951c133cd4f874b1f144a509,PodSandboxId:56c5eaaefd9dc6c292acb872e35cd14ae77a0efc534ef81a62500216f906e161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725990614292937014,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a1e98fee-9045-41bc-b7b7-61126282e434 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7f35f5f9c0297       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   4704ca681891e       busybox-7dff88458-2t4ms
	142a15832796a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   1c4e9776e0278       coredns-6f6b679f8f-5pv7s
	6899c9efcedba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   434931d96929c       coredns-6f6b679f8f-fmcmc
	17eb3a40b6aba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   d537c4783b42f       storage-provisioner
	e119a0b88cc46       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   70857c92d854f       kindnet-n8n67
	1668374a3d17c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   718077b7bfae6       kube-proxy-gjqzx
	284b2d71723b7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   495ea13704d28       kube-vip-ha-558946
	edfccb881d415       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   8c5d88f2921ad       kube-scheduler-ha-558946
	a97a13adca4b5       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   6db7b892990fc       kube-apiserver-ha-558946
	4056c90198fe8       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   56c5eaaefd9dc       kube-controller-manager-ha-558946
	5ebc6afb00309       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   ca3c0af433ced       etcd-ha-558946
	
	
	==> coredns [142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557] <==
	[INFO] 10.244.1.2:38446 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121244s
	[INFO] 10.244.1.2:40680 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000102424s
	[INFO] 10.244.1.2:37614 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000113138s
	[INFO] 10.244.1.2:55352 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001904347s
	[INFO] 10.244.0.4:44314 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.011515239s
	[INFO] 10.244.0.4:59105 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214169s
	[INFO] 10.244.2.2:52223 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140107s
	[INFO] 10.244.2.2:51288 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170301s
	[INFO] 10.244.2.2:43443 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001154075s
	[INFO] 10.244.2.2:45133 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069754s
	[INFO] 10.244.2.2:57378 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111864s
	[INFO] 10.244.1.2:55758 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134542s
	[INFO] 10.244.1.2:40786 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001892504s
	[INFO] 10.244.1.2:39596 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093152s
	[INFO] 10.244.1.2:38058 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000155632s
	[INFO] 10.244.0.4:32898 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106177s
	[INFO] 10.244.0.4:54445 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077161s
	[INFO] 10.244.0.4:39012 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000203844s
	[INFO] 10.244.2.2:51010 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117281s
	[INFO] 10.244.2.2:51174 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000141181s
	[INFO] 10.244.2.2:55393 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185738s
	[INFO] 10.244.2.2:37830 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000216713s
	[INFO] 10.244.2.2:45453 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139889s
	[INFO] 10.244.1.2:46063 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168728s
	[INFO] 10.244.1.2:59108 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116561s
	
	
	==> coredns [6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8] <==
	[INFO] 10.244.0.4:59904 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140397s
	[INFO] 10.244.0.4:35340 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122725s
	[INFO] 10.244.0.4:49436 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125155s
	[INFO] 10.244.0.4:34813 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146204s
	[INFO] 10.244.2.2:34474 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001792363s
	[INFO] 10.244.2.2:38827 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095844s
	[INFO] 10.244.2.2:52413 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066168s
	[INFO] 10.244.1.2:60142 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001269934s
	[INFO] 10.244.1.2:54320 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135946s
	[INFO] 10.244.1.2:51279 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144222s
	[INFO] 10.244.1.2:40290 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000149426s
	[INFO] 10.244.0.4:53110 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120693s
	[INFO] 10.244.2.2:42194 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000245782s
	[INFO] 10.244.2.2:59001 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012574s
	[INFO] 10.244.1.2:60266 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150897s
	[INFO] 10.244.1.2:57758 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013393s
	[INFO] 10.244.1.2:37225 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099566s
	[INFO] 10.244.1.2:49900 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113789s
	[INFO] 10.244.0.4:37306 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000237921s
	[INFO] 10.244.0.4:36705 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168887s
	[INFO] 10.244.0.4:34074 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013369s
	[INFO] 10.244.0.4:34879 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107543s
	[INFO] 10.244.2.2:60365 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000255288s
	[INFO] 10.244.1.2:49914 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000123225s
	[INFO] 10.244.1.2:59420 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000122155s
	
	
	==> describe nodes <==
	Name:               ha-558946
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-558946
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-558946
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T17_50_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:50:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-558946
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 17:57:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 17:52:53 +0000   Tue, 10 Sep 2024 17:50:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 17:52:53 +0000   Tue, 10 Sep 2024 17:50:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 17:52:53 +0000   Tue, 10 Sep 2024 17:50:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 17:52:53 +0000   Tue, 10 Sep 2024 17:50:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    ha-558946
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6888e6da1bdd45dda1c087615a5c1996
	  System UUID:                6888e6da-1bdd-45dd-a1c0-87615a5c1996
	  Boot ID:                    a2579398-c9ae-48e0-a407-b08542361a94
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2t4ms              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 coredns-6f6b679f8f-5pv7s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m49s
	  kube-system                 coredns-6f6b679f8f-fmcmc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m49s
	  kube-system                 etcd-ha-558946                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m56s
	  kube-system                 kindnet-n8n67                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m49s
	  kube-system                 kube-apiserver-ha-558946             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 kube-controller-manager-ha-558946    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m54s
	  kube-system                 kube-proxy-gjqzx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 kube-scheduler-ha-558946             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m54s
	  kube-system                 kube-vip-ha-558946                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m54s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m48s                kube-proxy       
	  Normal  NodeAllocatableEnforced  7m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m1s (x8 over 7m1s)  kubelet          Node ha-558946 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m1s (x8 over 7m1s)  kubelet          Node ha-558946 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m1s (x7 over 7m1s)  kubelet          Node ha-558946 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m54s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m54s                kubelet          Node ha-558946 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m54s                kubelet          Node ha-558946 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m54s                kubelet          Node ha-558946 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m50s                node-controller  Node ha-558946 event: Registered Node ha-558946 in Controller
	  Normal  NodeReady                6m35s                kubelet          Node ha-558946 status is now: NodeReady
	  Normal  RegisteredNode           5m55s                node-controller  Node ha-558946 event: Registered Node ha-558946 in Controller
	  Normal  RegisteredNode           4m44s                node-controller  Node ha-558946 event: Registered Node ha-558946 in Controller
	
	
	Name:               ha-558946-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-558946-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-558946
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T17_51_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:51:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-558946-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 17:53:54 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 10 Sep 2024 17:53:13 +0000   Tue, 10 Sep 2024 17:54:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 10 Sep 2024 17:53:13 +0000   Tue, 10 Sep 2024 17:54:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 10 Sep 2024 17:53:13 +0000   Tue, 10 Sep 2024 17:54:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 10 Sep 2024 17:53:13 +0000   Tue, 10 Sep 2024 17:54:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    ha-558946-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 db1a36bf29714274bd4e3db4349b13e5
	  System UUID:                db1a36bf-2971-4274-bd4e-3db4349b13e5
	  Boot ID:                    a1e6458f-d889-45f0-9111-7341b37855d1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xnl8m                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 etcd-ha-558946-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m1s
	  kube-system                 kindnet-sfr7m                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m4s
	  kube-system                 kube-apiserver-ha-558946-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-controller-manager-ha-558946-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-proxy-xggtm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-scheduler-ha-558946-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-vip-ha-558946-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m4s (x8 over 6m4s)  kubelet          Node ha-558946-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m4s (x8 over 6m4s)  kubelet          Node ha-558946-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m4s (x7 over 6m4s)  kubelet          Node ha-558946-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m                   node-controller  Node ha-558946-m02 event: Registered Node ha-558946-m02 in Controller
	  Normal  RegisteredNode           5m55s                node-controller  Node ha-558946-m02 event: Registered Node ha-558946-m02 in Controller
	  Normal  RegisteredNode           4m44s                node-controller  Node ha-558946-m02 event: Registered Node ha-558946-m02 in Controller
	  Normal  NodeNotReady             2m39s                node-controller  Node ha-558946-m02 status is now: NodeNotReady
	
	
	Name:               ha-558946-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-558946-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-558946
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T17_52_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:52:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-558946-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 17:57:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 17:52:51 +0000   Tue, 10 Sep 2024 17:52:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 17:52:51 +0000   Tue, 10 Sep 2024 17:52:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 17:52:51 +0000   Tue, 10 Sep 2024 17:52:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 17:52:51 +0000   Tue, 10 Sep 2024 17:52:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-558946-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8bf15e91753540d5b2e0f1553e9cfa68
	  System UUID:                8bf15e91-7535-40d5-b2e0-f1553e9cfa68
	  Boot ID:                    1d53ab20-8447-45b7-9abb-9b9612c466dc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-szkr7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 etcd-ha-558946-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m52s
	  kube-system                 kindnet-mshf2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m54s
	  kube-system                 kube-apiserver-ha-558946-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-controller-manager-ha-558946-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-proxy-8ldlx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-scheduler-ha-558946-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-vip-ha-558946-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m54s (x8 over 4m54s)  kubelet          Node ha-558946-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s (x8 over 4m54s)  kubelet          Node ha-558946-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s (x7 over 4m54s)  kubelet          Node ha-558946-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m50s                  node-controller  Node ha-558946-m03 event: Registered Node ha-558946-m03 in Controller
	  Normal  RegisteredNode           4m50s                  node-controller  Node ha-558946-m03 event: Registered Node ha-558946-m03 in Controller
	  Normal  RegisteredNode           4m44s                  node-controller  Node ha-558946-m03 event: Registered Node ha-558946-m03 in Controller
	
	
	Name:               ha-558946-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-558946-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-558946
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T17_53_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:53:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-558946-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 17:57:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 17:53:51 +0000   Tue, 10 Sep 2024 17:53:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 17:53:51 +0000   Tue, 10 Sep 2024 17:53:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 17:53:51 +0000   Tue, 10 Sep 2024 17:53:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 17:53:51 +0000   Tue, 10 Sep 2024 17:53:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-558946-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aded0f54a0334cb59bab04e35bcf99b0
	  System UUID:                aded0f54-a033-4cb5-9bab-04e35bcf99b0
	  Boot ID:                    1351708d-4980-4151-bfae-1b9049afb79c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7kzcw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system                 kube-proxy-mk6xt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m54s (x2 over 3m54s)  kubelet          Node ha-558946-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x2 over 3m54s)  kubelet          Node ha-558946-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x2 over 3m54s)  kubelet          Node ha-558946-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-558946-m04 event: Registered Node ha-558946-m04 in Controller
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-558946-m04 event: Registered Node ha-558946-m04 in Controller
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-558946-m04 event: Registered Node ha-558946-m04 in Controller
	  Normal  NodeReady                3m35s                  kubelet          Node ha-558946-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep10 17:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050705] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039929] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.788388] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.469715] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.561365] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep10 17:50] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.058035] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055902] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.190997] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.121180] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.267314] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.918739] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +4.478653] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.062428] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.320707] systemd-fstab-generator[1311]: Ignoring "noauto" option for root device
	[  +0.078655] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.553971] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.155608] kauditd_printk_skb: 38 callbacks suppressed
	[Sep10 17:51] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa] <==
	{"level":"warn","ts":"2024-09-10T17:57:14.461547Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.486153Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.490232Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.563741Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.571411Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.576552Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.586145Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.586197Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.592912Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.599129Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.602650Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.605538Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.611454Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.616554Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.621865Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.624875Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.627528Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.634377Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.634696Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.637581Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.642393Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.647822Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.667120Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.674453Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-10T17:57:14.686277Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"22872ffef731375a","from":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:57:14 up 7 min,  0 users,  load average: 0.38, 0.40, 0.20
	Linux ha-558946 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d] <==
	I0910 17:56:39.332698       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 17:56:49.336254       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 17:56:49.336438       1 main.go:299] handling current node
	I0910 17:56:49.336489       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 17:56:49.336522       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 17:56:49.336717       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0910 17:56:49.336764       1 main.go:322] Node ha-558946-m03 has CIDR [10.244.2.0/24] 
	I0910 17:56:49.336880       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 17:56:49.336931       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 17:56:59.336231       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0910 17:56:59.336372       1 main.go:322] Node ha-558946-m03 has CIDR [10.244.2.0/24] 
	I0910 17:56:59.336534       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 17:56:59.336556       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 17:56:59.336614       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 17:56:59.336632       1 main.go:299] handling current node
	I0910 17:56:59.336678       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 17:56:59.336694       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 17:57:09.337904       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 17:57:09.337998       1 main.go:299] handling current node
	I0910 17:57:09.338030       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 17:57:09.338172       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 17:57:09.338338       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0910 17:57:09.338365       1 main.go:322] Node ha-558946-m03 has CIDR [10.244.2.0/24] 
	I0910 17:57:09.338428       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 17:57:09.338447       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a97a13adca4b5dce2fca4dd9c379c35eb00fbc1282dbac89320ee7500e30af5d] <==
	I0910 17:50:20.369262       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0910 17:50:20.392833       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0910 17:50:20.409676       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0910 17:50:25.139036       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0910 17:50:25.200259       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0910 17:52:21.636891       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 8.163µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0910 17:52:21.636925       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0910 17:52:21.638695       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0910 17:52:21.639913       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0910 17:52:21.641267       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.714645ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0910 17:52:51.449514       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57658: use of closed network connection
	E0910 17:52:51.627377       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57686: use of closed network connection
	E0910 17:52:51.816302       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57704: use of closed network connection
	E0910 17:52:52.014303       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57722: use of closed network connection
	E0910 17:52:52.199349       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57752: use of closed network connection
	E0910 17:52:52.392530       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57776: use of closed network connection
	E0910 17:52:52.572461       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57794: use of closed network connection
	E0910 17:52:52.755547       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57814: use of closed network connection
	E0910 17:52:52.934221       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57832: use of closed network connection
	E0910 17:52:53.222422       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57862: use of closed network connection
	E0910 17:52:53.394049       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57884: use of closed network connection
	E0910 17:52:53.576589       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57906: use of closed network connection
	E0910 17:52:53.744810       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57922: use of closed network connection
	E0910 17:52:53.920034       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57942: use of closed network connection
	E0910 17:52:54.085568       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57952: use of closed network connection
	
	
	==> kube-controller-manager [4056c90198fe8b04ee4ae9c0719d9142563fc0c3951c133cd4f874b1f144a509] <==
	I0910 17:53:20.701380       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-558946-m04" podCIDRs=["10.244.3.0/24"]
	I0910 17:53:20.701620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:20.703776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:20.730928       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:20.983932       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:21.403938       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:24.445958       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-558946-m04"
	I0910 17:53:24.469703       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:24.994041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:25.023975       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:25.510283       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:25.584729       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:30.878599       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:39.819592       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-558946-m04"
	I0910 17:53:39.819773       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:39.834972       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:40.009388       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:53:51.518945       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 17:54:35.538503       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-558946-m04"
	I0910 17:54:35.542258       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m02"
	I0910 17:54:35.568467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m02"
	I0910 17:54:35.609829       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.148374ms"
	I0910 17:54:35.609955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="63.326µs"
	I0910 17:54:39.569032       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m02"
	I0910 17:54:40.778882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m02"
	
	
	==> kube-proxy [1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 17:50:26.217741       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 17:50:26.246303       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.109"]
	E0910 17:50:26.246439       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 17:50:26.302452       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 17:50:26.302542       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 17:50:26.302583       1 server_linux.go:169] "Using iptables Proxier"
	I0910 17:50:26.305035       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 17:50:26.305345       1 server.go:483] "Version info" version="v1.31.0"
	I0910 17:50:26.305506       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 17:50:26.307212       1 config.go:197] "Starting service config controller"
	I0910 17:50:26.307266       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 17:50:26.307302       1 config.go:104] "Starting endpoint slice config controller"
	I0910 17:50:26.307317       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 17:50:26.308183       1 config.go:326] "Starting node config controller"
	I0910 17:50:26.308271       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 17:50:26.407679       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 17:50:26.407768       1 shared_informer.go:320] Caches are synced for service config
	I0910 17:50:26.409170       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc] <==
	W0910 17:50:18.630667       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0910 17:50:18.630755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 17:50:18.651190       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0910 17:50:18.651333       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:50:18.664238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0910 17:50:18.664306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:50:18.749872       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 17:50:18.749932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:50:18.754538       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0910 17:50:18.754610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:50:18.775133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 17:50:18.775251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0910 17:50:18.783301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 17:50:18.783579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:50:18.783311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0910 17:50:18.783701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0910 17:50:20.762780       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0910 17:53:20.783017       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7kzcw\": pod kindnet-7kzcw is already assigned to node \"ha-558946-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-7kzcw" node="ha-558946-m04"
	E0910 17:53:20.783217       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a925295e-bc22-4154-850e-79962508c7ac(kube-system/kindnet-7kzcw) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-7kzcw"
	E0910 17:53:20.783245       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7kzcw\": pod kindnet-7kzcw is already assigned to node \"ha-558946-m04\"" pod="kube-system/kindnet-7kzcw"
	I0910 17:53:20.783283       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-7kzcw" node="ha-558946-m04"
	E0910 17:53:20.926971       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9xbp8\": pod kindnet-9xbp8 is already assigned to node \"ha-558946-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-9xbp8" node="ha-558946-m04"
	E0910 17:53:20.927165       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d228e8b7-bd1d-442c-bf6a-2240d8c2ac04(kube-system/kindnet-9xbp8) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-9xbp8"
	E0910 17:53:20.927360       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9xbp8\": pod kindnet-9xbp8 is already assigned to node \"ha-558946-m04\"" pod="kube-system/kindnet-9xbp8"
	I0910 17:53:20.927386       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-9xbp8" node="ha-558946-m04"
	
	
	==> kubelet <==
	Sep 10 17:55:40 ha-558946 kubelet[1318]: E0910 17:55:40.410691    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990940410012375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:55:40 ha-558946 kubelet[1318]: E0910 17:55:40.411346    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990940410012375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:55:50 ha-558946 kubelet[1318]: E0910 17:55:50.414043    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990950413461728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:55:50 ha-558946 kubelet[1318]: E0910 17:55:50.414435    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990950413461728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:56:00 ha-558946 kubelet[1318]: E0910 17:56:00.415948    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990960415570032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:56:00 ha-558946 kubelet[1318]: E0910 17:56:00.415984    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990960415570032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:56:10 ha-558946 kubelet[1318]: E0910 17:56:10.417278    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990970416781710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:56:10 ha-558946 kubelet[1318]: E0910 17:56:10.417619    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990970416781710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:56:20 ha-558946 kubelet[1318]: E0910 17:56:20.301862    1318 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 17:56:20 ha-558946 kubelet[1318]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 17:56:20 ha-558946 kubelet[1318]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 17:56:20 ha-558946 kubelet[1318]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 17:56:20 ha-558946 kubelet[1318]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 17:56:20 ha-558946 kubelet[1318]: E0910 17:56:20.419299    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990980418917252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:56:20 ha-558946 kubelet[1318]: E0910 17:56:20.419321    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990980418917252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:56:30 ha-558946 kubelet[1318]: E0910 17:56:30.421935    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990990420952243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:56:30 ha-558946 kubelet[1318]: E0910 17:56:30.423521    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725990990420952243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:56:40 ha-558946 kubelet[1318]: E0910 17:56:40.430436    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991000430191789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:56:40 ha-558946 kubelet[1318]: E0910 17:56:40.430480    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991000430191789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:56:50 ha-558946 kubelet[1318]: E0910 17:56:50.432554    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991010432181449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:56:50 ha-558946 kubelet[1318]: E0910 17:56:50.432584    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991010432181449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:57:00 ha-558946 kubelet[1318]: E0910 17:57:00.435397    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991020435000591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:57:00 ha-558946 kubelet[1318]: E0910 17:57:00.435470    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991020435000591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:57:10 ha-558946 kubelet[1318]: E0910 17:57:10.436847    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991030436503634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 17:57:10 ha-558946 kubelet[1318]: E0910 17:57:10.437229    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991030436503634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-558946 -n ha-558946
helpers_test.go:261: (dbg) Run:  kubectl --context ha-558946 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (50.44s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (389.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-558946 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-558946 -v=7 --alsologtostderr
E0910 17:58:56.538799   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-558946 -v=7 --alsologtostderr: exit status 82 (2m1.904785991s)

                                                
                                                
-- stdout --
	* Stopping node "ha-558946-m04"  ...
	* Stopping node "ha-558946-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 17:57:16.086880   30116 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:57:16.086989   30116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:57:16.086998   30116 out.go:358] Setting ErrFile to fd 2...
	I0910 17:57:16.087002   30116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:57:16.087153   30116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:57:16.087364   30116 out.go:352] Setting JSON to false
	I0910 17:57:16.087443   30116 mustload.go:65] Loading cluster: ha-558946
	I0910 17:57:16.087769   30116 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:57:16.087849   30116 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:57:16.088064   30116 mustload.go:65] Loading cluster: ha-558946
	I0910 17:57:16.088206   30116 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:57:16.088238   30116 stop.go:39] StopHost: ha-558946-m04
	I0910 17:57:16.088621   30116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:57:16.088673   30116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:57:16.103923   30116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34481
	I0910 17:57:16.104338   30116 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:57:16.104840   30116 main.go:141] libmachine: Using API Version  1
	I0910 17:57:16.104862   30116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:57:16.105249   30116 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:57:16.107438   30116 out.go:177] * Stopping node "ha-558946-m04"  ...
	I0910 17:57:16.109026   30116 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0910 17:57:16.109057   30116 main.go:141] libmachine: (ha-558946-m04) Calling .DriverName
	I0910 17:57:16.109291   30116 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0910 17:57:16.109323   30116 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHHostname
	I0910 17:57:16.112746   30116 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:57:16.113228   30116 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:53:09 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 17:57:16.113258   30116 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 17:57:16.113424   30116 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHPort
	I0910 17:57:16.113585   30116 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHKeyPath
	I0910 17:57:16.113725   30116 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHUsername
	I0910 17:57:16.113839   30116 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m04/id_rsa Username:docker}
	I0910 17:57:16.201271   30116 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0910 17:57:16.255019   30116 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0910 17:57:16.308132   30116 main.go:141] libmachine: Stopping "ha-558946-m04"...
	I0910 17:57:16.308172   30116 main.go:141] libmachine: (ha-558946-m04) Calling .GetState
	I0910 17:57:16.309605   30116 main.go:141] libmachine: (ha-558946-m04) Calling .Stop
	I0910 17:57:16.313443   30116 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 0/120
	I0910 17:57:17.535918   30116 main.go:141] libmachine: (ha-558946-m04) Calling .GetState
	I0910 17:57:17.537147   30116 main.go:141] libmachine: Machine "ha-558946-m04" was stopped.
	I0910 17:57:17.537163   30116 stop.go:75] duration metric: took 1.428150432s to stop
	I0910 17:57:17.537180   30116 stop.go:39] StopHost: ha-558946-m03
	I0910 17:57:17.537485   30116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:57:17.537521   30116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:57:17.552112   30116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37303
	I0910 17:57:17.552601   30116 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:57:17.553016   30116 main.go:141] libmachine: Using API Version  1
	I0910 17:57:17.553033   30116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:57:17.553398   30116 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:57:17.555322   30116 out.go:177] * Stopping node "ha-558946-m03"  ...
	I0910 17:57:17.556395   30116 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0910 17:57:17.556416   30116 main.go:141] libmachine: (ha-558946-m03) Calling .DriverName
	I0910 17:57:17.556645   30116 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0910 17:57:17.556666   30116 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHHostname
	I0910 17:57:17.559768   30116 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:57:17.560195   30116 main.go:141] libmachine: (ha-558946-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:d7:43", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:51:47 +0000 UTC Type:0 Mac:52:54:00:fd:d7:43 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-558946-m03 Clientid:01:52:54:00:fd:d7:43}
	I0910 17:57:17.560225   30116 main.go:141] libmachine: (ha-558946-m03) DBG | domain ha-558946-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:fd:d7:43 in network mk-ha-558946
	I0910 17:57:17.560300   30116 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHPort
	I0910 17:57:17.560450   30116 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHKeyPath
	I0910 17:57:17.560585   30116 main.go:141] libmachine: (ha-558946-m03) Calling .GetSSHUsername
	I0910 17:57:17.560731   30116 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m03/id_rsa Username:docker}
	I0910 17:57:17.648128   30116 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0910 17:57:17.706766   30116 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0910 17:57:17.766596   30116 main.go:141] libmachine: Stopping "ha-558946-m03"...
	I0910 17:57:17.766628   30116 main.go:141] libmachine: (ha-558946-m03) Calling .GetState
	I0910 17:57:17.768128   30116 main.go:141] libmachine: (ha-558946-m03) Calling .Stop
	I0910 17:57:17.771397   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 0/120
	I0910 17:57:18.772728   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 1/120
	I0910 17:57:19.774141   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 2/120
	I0910 17:57:20.775449   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 3/120
	I0910 17:57:21.776855   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 4/120
	I0910 17:57:22.779290   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 5/120
	I0910 17:57:23.781639   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 6/120
	I0910 17:57:24.783238   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 7/120
	I0910 17:57:25.784674   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 8/120
	I0910 17:57:26.786002   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 9/120
	I0910 17:57:27.787162   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 10/120
	I0910 17:57:28.788702   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 11/120
	I0910 17:57:29.789955   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 12/120
	I0910 17:57:30.791797   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 13/120
	I0910 17:57:31.793145   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 14/120
	I0910 17:57:32.794784   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 15/120
	I0910 17:57:33.796327   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 16/120
	I0910 17:57:34.797714   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 17/120
	I0910 17:57:35.799118   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 18/120
	I0910 17:57:36.800506   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 19/120
	I0910 17:57:37.801653   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 20/120
	I0910 17:57:38.803229   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 21/120
	I0910 17:57:39.804480   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 22/120
	I0910 17:57:40.805980   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 23/120
	I0910 17:57:41.807535   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 24/120
	I0910 17:57:42.809521   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 25/120
	I0910 17:57:43.810847   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 26/120
	I0910 17:57:44.812474   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 27/120
	I0910 17:57:45.813875   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 28/120
	I0910 17:57:46.815345   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 29/120
	I0910 17:57:47.816509   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 30/120
	I0910 17:57:48.817826   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 31/120
	I0910 17:57:49.819653   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 32/120
	I0910 17:57:50.820891   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 33/120
	I0910 17:57:51.822163   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 34/120
	I0910 17:57:52.823931   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 35/120
	I0910 17:57:53.824986   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 36/120
	I0910 17:57:54.826289   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 37/120
	I0910 17:57:55.827367   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 38/120
	I0910 17:57:56.828881   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 39/120
	I0910 17:57:57.830504   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 40/120
	I0910 17:57:58.831811   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 41/120
	I0910 17:57:59.832971   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 42/120
	I0910 17:58:00.834333   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 43/120
	I0910 17:58:01.835569   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 44/120
	I0910 17:58:02.837122   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 45/120
	I0910 17:58:03.838317   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 46/120
	I0910 17:58:04.839621   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 47/120
	I0910 17:58:05.840729   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 48/120
	I0910 17:58:06.842276   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 49/120
	I0910 17:58:07.844070   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 50/120
	I0910 17:58:08.845282   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 51/120
	I0910 17:58:09.846650   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 52/120
	I0910 17:58:10.847970   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 53/120
	I0910 17:58:11.849244   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 54/120
	I0910 17:58:12.850818   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 55/120
	I0910 17:58:13.852063   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 56/120
	I0910 17:58:14.853417   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 57/120
	I0910 17:58:15.854674   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 58/120
	I0910 17:58:16.855991   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 59/120
	I0910 17:58:17.857991   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 60/120
	I0910 17:58:18.859323   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 61/120
	I0910 17:58:19.860550   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 62/120
	I0910 17:58:20.861881   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 63/120
	I0910 17:58:21.863426   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 64/120
	I0910 17:58:22.865254   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 65/120
	I0910 17:58:23.866615   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 66/120
	I0910 17:58:24.867970   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 67/120
	I0910 17:58:25.869252   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 68/120
	I0910 17:58:26.870434   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 69/120
	I0910 17:58:27.871953   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 70/120
	I0910 17:58:28.873310   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 71/120
	I0910 17:58:29.874533   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 72/120
	I0910 17:58:30.875917   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 73/120
	I0910 17:58:31.877160   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 74/120
	I0910 17:58:32.878749   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 75/120
	I0910 17:58:33.879978   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 76/120
	I0910 17:58:34.881436   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 77/120
	I0910 17:58:35.882975   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 78/120
	I0910 17:58:36.885196   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 79/120
	I0910 17:58:37.886714   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 80/120
	I0910 17:58:38.888313   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 81/120
	I0910 17:58:39.889684   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 82/120
	I0910 17:58:40.891441   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 83/120
	I0910 17:58:41.892745   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 84/120
	I0910 17:58:42.893990   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 85/120
	I0910 17:58:43.895349   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 86/120
	I0910 17:58:44.896764   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 87/120
	I0910 17:58:45.898531   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 88/120
	I0910 17:58:46.899748   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 89/120
	I0910 17:58:47.901725   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 90/120
	I0910 17:58:48.902973   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 91/120
	I0910 17:58:49.904288   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 92/120
	I0910 17:58:50.905783   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 93/120
	I0910 17:58:51.906898   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 94/120
	I0910 17:58:52.908532   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 95/120
	I0910 17:58:53.909817   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 96/120
	I0910 17:58:54.911281   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 97/120
	I0910 17:58:55.912603   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 98/120
	I0910 17:58:56.913791   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 99/120
	I0910 17:58:57.915746   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 100/120
	I0910 17:58:58.917155   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 101/120
	I0910 17:58:59.918364   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 102/120
	I0910 17:59:00.919603   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 103/120
	I0910 17:59:01.920814   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 104/120
	I0910 17:59:02.922188   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 105/120
	I0910 17:59:03.923643   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 106/120
	I0910 17:59:04.925019   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 107/120
	I0910 17:59:05.926531   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 108/120
	I0910 17:59:06.927785   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 109/120
	I0910 17:59:07.929390   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 110/120
	I0910 17:59:08.931515   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 111/120
	I0910 17:59:09.932915   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 112/120
	I0910 17:59:10.934210   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 113/120
	I0910 17:59:11.935625   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 114/120
	I0910 17:59:12.937396   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 115/120
	I0910 17:59:13.938742   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 116/120
	I0910 17:59:14.940086   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 117/120
	I0910 17:59:15.941989   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 118/120
	I0910 17:59:16.943345   30116 main.go:141] libmachine: (ha-558946-m03) Waiting for machine to stop 119/120
	I0910 17:59:17.944302   30116 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0910 17:59:17.944358   30116 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0910 17:59:17.946135   30116 out.go:201] 
	W0910 17:59:17.947385   30116 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0910 17:59:17.947398   30116 out.go:270] * 
	* 
	W0910 17:59:17.950472   30116 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 17:59:17.951664   30116 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-558946 -v=7 --alsologtostderr" : exit status 82
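	(Context for the failure above, added for readability: the stderr shows minikube requesting a stop of "ha-558946-m03" and then polling its state once per second for 120 attempts before giving up with GUEST_STOP_TIMEOUT and exit status 82. The sketch below is a hypothetical illustration of that bounded stop-and-poll pattern only, not minikube's actual implementation; the `vm` interface, `stopWithTimeout` name, and parameters are assumptions made for illustration.)

	// Hypothetical sketch (assumptions noted above): a bounded stop-and-poll
	// loop mirroring the "Waiting for machine to stop N/120" lines in the log.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vm is a stand-in for a machine-driver handle; illustrative only.
	type vm interface {
		Stop() error            // request the guest to shut down
		State() (string, error) // e.g. "Running", "Stopped"
	}

	// stopWithTimeout asks the VM to stop, then polls its state once per second
	// for up to maxPolls attempts (120 in the log above). If the VM is still
	// running after the last poll, it returns an error, which a caller could
	// surface as a GUEST_STOP_TIMEOUT-style failure (exit status 82 above).
	func stopWithTimeout(m vm, name string, maxPolls int) error {
		if err := m.Stop(); err != nil {
			return fmt.Errorf("stop %s: %w", name, err)
		}
		for i := 0; i < maxPolls; i++ {
			st, err := m.State()
			if err != nil {
				return fmt.Errorf("state %s: %w", name, err)
			}
			if st == "Stopped" {
				return nil
			}
			fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", name, i, maxPolls)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Usage would look like:
		//   err := stopWithTimeout(machine, "ha-558946-m03", 120)
		// where a non-nil error corresponds to the timeout seen in this test.
		_ = stopWithTimeout
	}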
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-558946 --wait=true -v=7 --alsologtostderr
E0910 17:59:24.241767   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:01:35.171320   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:02:58.236068   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-558946 --wait=true -v=7 --alsologtostderr: (4m25.281461175s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-558946
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-558946 -n ha-558946
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-558946 logs -n 25: (1.804895594s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-558946 cp ha-558946-m03:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m02:/home/docker/cp-test_ha-558946-m03_ha-558946-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946-m02 sudo cat                                          | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m03_ha-558946-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m03:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04:/home/docker/cp-test_ha-558946-m03_ha-558946-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946-m04 sudo cat                                          | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m03_ha-558946-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-558946 cp testdata/cp-test.txt                                                | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1374294018/001/cp-test_ha-558946-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946:/home/docker/cp-test_ha-558946-m04_ha-558946.txt                       |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946 sudo cat                                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m04_ha-558946.txt                                 |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m02:/home/docker/cp-test_ha-558946-m04_ha-558946-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946-m02 sudo cat                                          | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m04_ha-558946-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m03:/home/docker/cp-test_ha-558946-m04_ha-558946-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946-m03 sudo cat                                          | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m04_ha-558946-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-558946 node stop m02 -v=7                                                     | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-558946 node start m02 -v=7                                                    | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-558946 -v=7                                                           | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-558946 -v=7                                                                | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-558946 --wait=true -v=7                                                    | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:59 UTC | 10 Sep 24 18:03 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-558946                                                                | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 18:03 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:59:17
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:59:17.996107   30598 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:59:17.996381   30598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:59:17.996395   30598 out.go:358] Setting ErrFile to fd 2...
	I0910 17:59:17.996402   30598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:59:17.996571   30598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:59:17.997167   30598 out.go:352] Setting JSON to false
	I0910 17:59:17.998168   30598 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2510,"bootTime":1725988648,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:59:17.998251   30598 start.go:139] virtualization: kvm guest
	I0910 17:59:18.000603   30598 out.go:177] * [ha-558946] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 17:59:18.002223   30598 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:59:18.002225   30598 notify.go:220] Checking for updates...
	I0910 17:59:18.004661   30598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:59:18.005863   30598 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:59:18.006960   30598 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:59:18.008085   30598 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 17:59:18.009266   30598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:59:18.010749   30598 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:59:18.010834   30598 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:59:18.011225   30598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:59:18.011278   30598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:59:18.026475   30598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39219
	I0910 17:59:18.026869   30598 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:59:18.027526   30598 main.go:141] libmachine: Using API Version  1
	I0910 17:59:18.027552   30598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:59:18.027946   30598 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:59:18.028135   30598 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:59:18.062430   30598 out.go:177] * Using the kvm2 driver based on existing profile
	I0910 17:59:18.063510   30598 start.go:297] selected driver: kvm2
	I0910 17:59:18.063527   30598 start.go:901] validating driver "kvm2" against &{Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.14 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:59:18.063712   30598 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:59:18.064056   30598 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:59:18.064131   30598 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 17:59:18.079062   30598 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 17:59:18.079759   30598 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:59:18.079797   30598 cni.go:84] Creating CNI manager for ""
	I0910 17:59:18.079808   30598 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0910 17:59:18.079875   30598 start.go:340] cluster config:
	{Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.14 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:59:18.080038   30598 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:59:18.082493   30598 out.go:177] * Starting "ha-558946" primary control-plane node in "ha-558946" cluster
	I0910 17:59:18.083654   30598 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:59:18.083698   30598 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0910 17:59:18.083705   30598 cache.go:56] Caching tarball of preloaded images
	I0910 17:59:18.083795   30598 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 17:59:18.083812   30598 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 17:59:18.083929   30598 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:59:18.084196   30598 start.go:360] acquireMachinesLock for ha-558946: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 17:59:18.084260   30598 start.go:364] duration metric: took 44.442µs to acquireMachinesLock for "ha-558946"
	I0910 17:59:18.084277   30598 start.go:96] Skipping create...Using existing machine configuration
	I0910 17:59:18.084288   30598 fix.go:54] fixHost starting: 
	I0910 17:59:18.084642   30598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:59:18.084681   30598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:59:18.098486   30598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0910 17:59:18.098911   30598 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:59:18.099415   30598 main.go:141] libmachine: Using API Version  1
	I0910 17:59:18.099439   30598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:59:18.099720   30598 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:59:18.099919   30598 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:59:18.100061   30598 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:59:18.101597   30598 fix.go:112] recreateIfNeeded on ha-558946: state=Running err=<nil>
	W0910 17:59:18.101619   30598 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 17:59:18.103365   30598 out.go:177] * Updating the running kvm2 "ha-558946" VM ...
	I0910 17:59:18.104475   30598 machine.go:93] provisionDockerMachine start ...
	I0910 17:59:18.104491   30598 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:59:18.104693   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:59:18.107144   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.107654   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:59:18.107669   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.107926   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:59:18.108079   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:59:18.108224   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:59:18.108350   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:59:18.108511   30598 main.go:141] libmachine: Using SSH client type: native
	I0910 17:59:18.108717   30598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:59:18.108729   30598 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 17:59:18.218106   30598 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-558946
	
	I0910 17:59:18.218146   30598 main.go:141] libmachine: (ha-558946) Calling .GetMachineName
	I0910 17:59:18.218380   30598 buildroot.go:166] provisioning hostname "ha-558946"
	I0910 17:59:18.218406   30598 main.go:141] libmachine: (ha-558946) Calling .GetMachineName
	I0910 17:59:18.218611   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:59:18.221293   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.221639   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:59:18.221658   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.221794   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:59:18.221956   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:59:18.222113   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:59:18.222300   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:59:18.222455   30598 main.go:141] libmachine: Using SSH client type: native
	I0910 17:59:18.222620   30598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:59:18.222631   30598 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-558946 && echo "ha-558946" | sudo tee /etc/hostname
	I0910 17:59:18.349873   30598 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-558946
	
	I0910 17:59:18.349900   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:59:18.352462   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.352824   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:59:18.352851   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.352983   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:59:18.353176   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:59:18.353397   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:59:18.353587   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:59:18.353768   30598 main.go:141] libmachine: Using SSH client type: native
	I0910 17:59:18.353958   30598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:59:18.353983   30598 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-558946' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-558946/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-558946' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 17:59:18.461957   30598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:59:18.461987   30598 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 17:59:18.462018   30598 buildroot.go:174] setting up certificates
	I0910 17:59:18.462026   30598 provision.go:84] configureAuth start
	I0910 17:59:18.462037   30598 main.go:141] libmachine: (ha-558946) Calling .GetMachineName
	I0910 17:59:18.462326   30598 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 17:59:18.464679   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.465086   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:59:18.465112   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.465232   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:59:18.467334   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.467656   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:59:18.467681   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.467803   30598 provision.go:143] copyHostCerts
	I0910 17:59:18.467832   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 17:59:18.467884   30598 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 17:59:18.467894   30598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 17:59:18.467973   30598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 17:59:18.468073   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 17:59:18.468100   30598 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 17:59:18.468110   30598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 17:59:18.468150   30598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 17:59:18.468206   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 17:59:18.468230   30598 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 17:59:18.468239   30598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 17:59:18.468275   30598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 17:59:18.468349   30598 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.ha-558946 san=[127.0.0.1 192.168.39.109 ha-558946 localhost minikube]
	I0910 17:59:18.599928   30598 provision.go:177] copyRemoteCerts
	I0910 17:59:18.599985   30598 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 17:59:18.600004   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:59:18.602648   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.602959   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:59:18.602993   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.603179   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:59:18.603338   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:59:18.603499   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:59:18.603617   30598 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:59:18.687689   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0910 17:59:18.687773   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 17:59:18.714065   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0910 17:59:18.714131   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0910 17:59:18.743470   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0910 17:59:18.743534   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 17:59:18.768716   30598 provision.go:87] duration metric: took 306.678576ms to configureAuth
	I0910 17:59:18.768736   30598 buildroot.go:189] setting minikube options for container-runtime
	I0910 17:59:18.768923   30598 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:59:18.768984   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:59:18.771487   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.771890   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:59:18.771928   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.772106   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:59:18.772318   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:59:18.772514   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:59:18.772654   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:59:18.772820   30598 main.go:141] libmachine: Using SSH client type: native
	I0910 17:59:18.773012   30598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:59:18.773030   30598 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:00:49.524020   30598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:00:49.524048   30598 machine.go:96] duration metric: took 1m31.419560916s to provisionDockerMachine
	I0910 18:00:49.524061   30598 start.go:293] postStartSetup for "ha-558946" (driver="kvm2")
	I0910 18:00:49.524071   30598 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:00:49.524085   30598 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 18:00:49.524394   30598 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:00:49.524419   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 18:00:49.527295   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.527764   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 18:00:49.527790   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.527931   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 18:00:49.528111   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 18:00:49.528263   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 18:00:49.528434   30598 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 18:00:49.613129   30598 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:00:49.617146   30598 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:00:49.617164   30598 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:00:49.617216   30598 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:00:49.617283   30598 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:00:49.617293   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /etc/ssl/certs/131212.pem
	I0910 18:00:49.617372   30598 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:00:49.627023   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:00:49.651466   30598 start.go:296] duration metric: took 127.39527ms for postStartSetup
	I0910 18:00:49.651530   30598 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 18:00:49.651802   30598 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0910 18:00:49.651828   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 18:00:49.654626   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.655000   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 18:00:49.655023   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.655199   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 18:00:49.655380   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 18:00:49.655549   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 18:00:49.655709   30598 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	W0910 18:00:49.740067   30598 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0910 18:00:49.740093   30598 fix.go:56] duration metric: took 1m31.655811019s for fixHost
	I0910 18:00:49.740112   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 18:00:49.742739   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.743109   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 18:00:49.743137   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.743267   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 18:00:49.743470   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 18:00:49.743605   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 18:00:49.743700   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 18:00:49.743822   30598 main.go:141] libmachine: Using SSH client type: native
	I0910 18:00:49.743979   30598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 18:00:49.743990   30598 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:00:49.850374   30598 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725991249.805675171
	
	I0910 18:00:49.850399   30598 fix.go:216] guest clock: 1725991249.805675171
	I0910 18:00:49.850409   30598 fix.go:229] Guest: 2024-09-10 18:00:49.805675171 +0000 UTC Remote: 2024-09-10 18:00:49.740099817 +0000 UTC m=+91.778943016 (delta=65.575354ms)
	I0910 18:00:49.850433   30598 fix.go:200] guest clock delta is within tolerance: 65.575354ms
	I0910 18:00:49.850439   30598 start.go:83] releasing machines lock for "ha-558946", held for 1m31.766168686s
	I0910 18:00:49.850458   30598 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 18:00:49.850726   30598 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 18:00:49.853352   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.853773   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 18:00:49.853804   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.853947   30598 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 18:00:49.854483   30598 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 18:00:49.854672   30598 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 18:00:49.854769   30598 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:00:49.854813   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 18:00:49.854873   30598 ssh_runner.go:195] Run: cat /version.json
	I0910 18:00:49.854895   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 18:00:49.857378   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.857709   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.857781   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 18:00:49.857819   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.857951   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 18:00:49.858138   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 18:00:49.858199   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 18:00:49.858222   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.858322   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 18:00:49.858497   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 18:00:49.858497   30598 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 18:00:49.858614   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 18:00:49.858755   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 18:00:49.858876   30598 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 18:00:49.938305   30598 ssh_runner.go:195] Run: systemctl --version
	I0910 18:00:49.961089   30598 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:00:50.123504   30598 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:00:50.129983   30598 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:00:50.130037   30598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:00:50.139294   30598 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0910 18:00:50.139316   30598 start.go:495] detecting cgroup driver to use...
	I0910 18:00:50.139367   30598 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:00:50.156064   30598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:00:50.169400   30598 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:00:50.169453   30598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:00:50.183120   30598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:00:50.197572   30598 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:00:50.347189   30598 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:00:50.500164   30598 docker.go:233] disabling docker service ...
	I0910 18:00:50.500232   30598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:00:50.519863   30598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:00:50.534419   30598 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:00:50.678225   30598 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:00:50.839930   30598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:00:50.854950   30598 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:00:50.873219   30598 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:00:50.873284   30598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:00:50.883371   30598 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:00:50.883422   30598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:00:50.893946   30598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:00:50.904305   30598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:00:50.914987   30598 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:00:50.925757   30598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:00:50.936216   30598 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:00:50.947374   30598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:00:50.957493   30598 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:00:50.967431   30598 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:00:50.977128   30598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:00:51.141132   30598 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:00:51.359681   30598 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:00:51.359769   30598 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:00:51.365265   30598 start.go:563] Will wait 60s for crictl version
	I0910 18:00:51.365319   30598 ssh_runner.go:195] Run: which crictl
	I0910 18:00:51.369292   30598 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:00:51.412354   30598 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:00:51.412443   30598 ssh_runner.go:195] Run: crio --version
	I0910 18:00:51.444894   30598 ssh_runner.go:195] Run: crio --version
	I0910 18:00:51.478266   30598 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 18:00:51.479386   30598 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 18:00:51.481963   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:51.482301   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 18:00:51.482327   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:51.482517   30598 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 18:00:51.487420   30598 kubeadm.go:883] updating cluster {Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.14 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:00:51.487612   30598 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:00:51.487672   30598 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:00:51.529965   30598 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:00:51.529986   30598 crio.go:433] Images already preloaded, skipping extraction
	I0910 18:00:51.530032   30598 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:00:51.565475   30598 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:00:51.565496   30598 cache_images.go:84] Images are preloaded, skipping loading
	I0910 18:00:51.565504   30598 kubeadm.go:934] updating node { 192.168.39.109 8443 v1.31.0 crio true true} ...
	I0910 18:00:51.565601   30598 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-558946 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
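The drop-in above follows the standard systemd override pattern: the first, empty ExecStart= clears the command inherited from the base kubelet.service unit before the full command line is redefined with the per-node values (hostname-override, node-ip). As a rough illustration only (this is not minikube's actual source; the template text and parameter names below are assumptions), such a drop-in could be rendered from those values with Go's text/template:

package main

import (
	"os"
	"text/template"
)

// Illustrative template for the 10-kubeadm.conf drop-in shown above
// (assumption: not minikube's real template). The empty ExecStart=
// clears the command from the base kubelet.service unit before it is
// redefined on the next line.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	params := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.0", "ha-558946", "192.168.39.109"}

	tmpl := template.Must(template.New("kubelet-dropin").Parse(kubeletDropIn))
	// Printed here; the run above copies the rendered text over SSH to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes in this run).
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}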
	I0910 18:00:51.565658   30598 ssh_runner.go:195] Run: crio config
	I0910 18:00:51.624935   30598 cni.go:84] Creating CNI manager for ""
	I0910 18:00:51.624954   30598 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0910 18:00:51.624970   30598 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:00:51.624991   30598 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.109 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-558946 NodeName:ha-558946 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:00:51.625154   30598 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-558946"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
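Two of the documents dumped above have to agree with each other: the ClusterConfiguration's podSubnet and the KubeProxyConfiguration's clusterCIDR are both 10.244.0.0/16. A minimal, hypothetical consistency check over those fields is sketched below; the YAML literals are copied from the log, while the check itself, the gopkg.in/yaml.v3 dependency, and the struct field names are assumptions for illustration and are not something the test performs.

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// YAML excerpts copied from the kubeadm config dump above.
const clusterCfg = `
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
`

const kubeProxyCfg = `
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
`

func main() {
	var cc struct {
		Networking struct {
			PodSubnet string `yaml:"podSubnet"`
		} `yaml:"networking"`
	}
	var kp struct {
		ClusterCIDR string `yaml:"clusterCIDR"`
	}
	if err := yaml.Unmarshal([]byte(clusterCfg), &cc); err != nil {
		panic(err)
	}
	if err := yaml.Unmarshal([]byte(kubeProxyCfg), &kp); err != nil {
		panic(err)
	}
	// kube-proxy must be told the same pod CIDR the cluster allocates from.
	fmt.Println("podSubnet matches clusterCIDR:", cc.Networking.PodSubnet == kp.ClusterCIDR)
}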
	
	I0910 18:00:51.625180   30598 kube-vip.go:115] generating kube-vip config ...
	I0910 18:00:51.625222   30598 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0910 18:00:51.637492   30598 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0910 18:00:51.637583   30598 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
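The manifest above pins the control-plane VIP 192.168.39.254 on eth0 and enables leader election with a 5s lease, 3s renew deadline, and 1s retry period, so whichever control-plane node holds the lease answers API traffic on port 8443. A minimal reachability probe for that VIP is sketched below; it is illustrative only and not something the test suite runs.

package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the VIP advertised by the kube-vip manifest above
// (address 192.168.39.254, port 8443) to confirm some control-plane
// node is currently answering on it.
func main() {
	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("VIP reachable via", conn.RemoteAddr())
}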
	I0910 18:00:51.637631   30598 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:00:51.647138   30598 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:00:51.647189   30598 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0910 18:00:51.656377   30598 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0910 18:00:51.672707   30598 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:00:51.688415   30598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0910 18:00:51.704846   30598 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0910 18:00:51.721742   30598 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0910 18:00:51.725705   30598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:00:51.871355   30598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:00:51.886087   30598 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946 for IP: 192.168.39.109
	I0910 18:00:51.886126   30598 certs.go:194] generating shared ca certs ...
	I0910 18:00:51.886145   30598 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:00:51.886318   30598 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:00:51.886374   30598 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:00:51.886388   30598 certs.go:256] generating profile certs ...
	I0910 18:00:51.886489   30598 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.key
	I0910 18:00:51.886523   30598 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.38a11416
	I0910 18:00:51.886551   30598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.38a11416 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.109 192.168.39.96 192.168.39.241 192.168.39.254]
	I0910 18:00:52.140635   30598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.38a11416 ...
	I0910 18:00:52.140669   30598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.38a11416: {Name:mk08913af0cdeb71c169c88b43462bb77ddac860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:00:52.140848   30598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.38a11416 ...
	I0910 18:00:52.140861   30598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.38a11416: {Name:mk8e47ac795705402ab5bb9615c3b69d125b73ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:00:52.140950   30598 certs.go:381] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.38a11416 -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt
	I0910 18:00:52.141111   30598 certs.go:385] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.38a11416 -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key
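The apiserver certificate generated here carries IP SANs for the service IP (10.96.0.1), localhost, all three control-plane node IPs, and the kube-vip VIP, so clients can validate the API server through any of those addresses. The sketch below shows roughly what that amounts to with crypto/x509; the throwaway self-signed CA, key sizes, and lifetimes are assumptions for illustration, whereas the run above signs with the existing ca.crt/ca.key from the minikube profile.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA for illustration only; minikube reuses its existing CA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the IP SANs listed in the log above.
	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	serverTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.109"), net.ParseIP("192.168.39.96"),
			net.ParseIP("192.168.39.241"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, serverTmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver cert DER bytes:", len(der))
}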
	I0910 18:00:52.141251   30598 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key
	I0910 18:00:52.141267   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 18:00:52.141287   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0910 18:00:52.141303   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 18:00:52.141318   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 18:00:52.141334   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 18:00:52.141349   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 18:00:52.141363   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 18:00:52.141375   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 18:00:52.141429   30598 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:00:52.141463   30598 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:00:52.141478   30598 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:00:52.141508   30598 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:00:52.141533   30598 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:00:52.141560   30598 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:00:52.141600   30598 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:00:52.141631   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /usr/share/ca-certificates/131212.pem
	I0910 18:00:52.141647   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:00:52.141663   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem -> /usr/share/ca-certificates/13121.pem
	I0910 18:00:52.142210   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:00:52.168458   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:00:52.191790   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:00:52.215143   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:00:52.238405   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0910 18:00:52.262460   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:00:52.288651   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:00:52.312566   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:00:52.336090   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:00:52.360210   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:00:52.384087   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:00:52.407158   30598 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:00:52.423104   30598 ssh_runner.go:195] Run: openssl version
	I0910 18:00:52.428929   30598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:00:52.439481   30598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:00:52.444185   30598 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:00:52.444229   30598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:00:52.449783   30598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:00:52.458965   30598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:00:52.469437   30598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:00:52.473744   30598 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:00:52.473779   30598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:00:52.479239   30598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:00:52.488304   30598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:00:52.498573   30598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:00:52.502888   30598 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:00:52.502928   30598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:00:52.508437   30598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
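Each CA certificate copied into /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash plus a ".0" suffix (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trusted roots. A hedged Go sketch of that step follows; linkCert is a hypothetical helper, and the real run issues the equivalent shell commands shown above over SSH as root.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash of a CA certificate and
// links it into /etc/ssl/certs as <hash>.0, mirroring the
// `openssl x509 -hash -noout` plus `ln -fs` sequence in the log above.
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Skip if the link already exists, like the shell's `test -L || ln -fs`.
	if _, err := os.Lstat(link); err == nil {
		return nil
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("link failed:", err)
	}
}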
	I0910 18:00:52.517503   30598 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:00:52.521996   30598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:00:52.527496   30598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:00:52.532849   30598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:00:52.538163   30598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:00:52.543867   30598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:00:52.549158   30598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
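Each `openssl x509 -noout -checkend 86400` run above exits non-zero if the certificate in question expires within the next 24 hours, which is what tells minikube whether control-plane certificates need to be regenerated before reuse. The equivalent check in Go is sketched below; expiresSoon is a hypothetical helper, and the test itself shells out to openssl as shown rather than doing this.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresSoon reports whether the PEM certificate at path expires within
// the given window, matching `openssl x509 -noout -checkend <seconds>`.
func expiresSoon(path string, within time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(within).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}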
	I0910 18:00:52.558344   30598 kubeadm.go:392] StartCluster: {Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.14 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:00:52.558444   30598 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:00:52.558486   30598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:00:52.628849   30598 cri.go:89] found id: "76252b520e2e2ef7bed8846d0750cacf3bd574fc6f7c3662f0e367e820690317"
	I0910 18:00:52.628871   30598 cri.go:89] found id: "915faa9c083e42d87148d930b63d2760a0666c3b6af5efa1b22adaffcc7a4875"
	I0910 18:00:52.628876   30598 cri.go:89] found id: "5bdf2bcf00f8265018e407f2babfe0d87b9d40e5399bac6ae2db8ca05366d76f"
	I0910 18:00:52.628879   30598 cri.go:89] found id: "839bf8fe43954c8f890e2c72c1cdd9e7f7ea8b844dfb1726e564e776771c6e18"
	I0910 18:00:52.628882   30598 cri.go:89] found id: "142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557"
	I0910 18:00:52.628885   30598 cri.go:89] found id: "6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8"
	I0910 18:00:52.628887   30598 cri.go:89] found id: "e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d"
	I0910 18:00:52.628889   30598 cri.go:89] found id: "1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c"
	I0910 18:00:52.628892   30598 cri.go:89] found id: "284b2d71723b7871cbf3305fb262bc61d05babb848d5f60cc805c17f9bd0a04e"
	I0910 18:00:52.628898   30598 cri.go:89] found id: "edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc"
	I0910 18:00:52.628900   30598 cri.go:89] found id: "a97a13adca4b5dce2fca4dd9c379c35eb00fbc1282dbac89320ee7500e30af5d"
	I0910 18:00:52.628903   30598 cri.go:89] found id: "4056c90198fe8b04ee4ae9c0719d9142563fc0c3951c133cd4f874b1f144a509"
	I0910 18:00:52.628905   30598 cri.go:89] found id: "5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa"
	I0910 18:00:52.628908   30598 cri.go:89] found id: ""
	I0910 18:00:52.628945   30598 ssh_runner.go:195] Run: sudo runc list -f json
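The "found id" lines above come from splitting the output of `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, which prints one container ID per line; splitting on newlines leaves a trailing empty string, which is likely why the last entry is an empty id. A minimal sketch of that parse follows (listKubeSystemContainers is a hypothetical helper; it has to run on the node itself, and the real test invokes crictl through sudo over SSH, then trims empty entries the way this sketch does).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the IDs of all kube-system containers
// known to CRI-O, parsed from `crictl ps -a --quiet --label ...` output
// (one container ID per line).
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if trimmed := strings.TrimSpace(line); trimmed != "" {
			ids = append(ids, trimmed)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Println("found", len(ids), "kube-system containers")
}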
	
	
	==> CRI-O <==
	Sep 10 18:03:43 ha-558946 crio[3620]: time="2024-09-10 18:03:43.909996744Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991423909963637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf343741-6c04-404b-aea2-8b4316752645 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:03:43 ha-558946 crio[3620]: time="2024-09-10 18:03:43.910576863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57873304-ad40-4000-8f23-d57b803ab01f name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:03:43 ha-558946 crio[3620]: time="2024-09-10 18:03:43.910687241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57873304-ad40-4000-8f23-d57b803ab01f name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:03:43 ha-558946 crio[3620]: time="2024-09-10 18:03:43.911665290Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6b2ffc5626f3775e9fb373cfb1b3350651ba8735b9190ab8dda1f3dbe9f1a30,PodSandboxId:682e75d7e519a2fd2267972de16d8ecf3066fe3c6475cdc7e98ae21d082739ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725991313294806034,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d16d6af2ae8be09fc14df4895ec76820b67e61632967bcac0af26aeadbcd339,PodSandboxId:d32b89c1b7c331b36461b8b13bbd526e4bc6241c7a258f24b5c9d649c6898aa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725991299321524381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2173425b282f1680eb621b99d603c566252599facbc35fc1d2fb8df76fc2b318,PodSandboxId:62cb63018e3103f7fc5e00bd4ca904e9524f8a7a92c7c3a33c1587a3ef278ea6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725991299307616134,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc93ca6fe5c1facf9e80ebe158beabecc3d1a8c32be0c454e3c89a52fa894422,PodSandboxId:6b30967ed8560b2e42d3fbb805e0e4872e4cefd3ac86bfe9d100580327591709,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725991291620475922,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10efff272db119d6c6a88edf63354fef6d528a7cadb208799e802d1e8affa0b7,PodSandboxId:34b147bddaf58b77c20fb3fdd19c44187b9cbd620c70e7fa7178429411bf12ae,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725991273167566246,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1dc86399c32e0e26e2d6ddcbf3bc74,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b6f7dc0df387bab7c1cc41a014568a8ec1ab51dab00c335a986e02d4529b71,PodSandboxId:a619eb4fb7dfbdd628ddf6c50657882a796c6ff57fdb8afae83432d53d414f6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725991258474946983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:7eeb82852ac4491fb60767bb4952b2f14f7290f33ee4288e83e54b4a39a88bf6,PodSandboxId:682e75d7e519a2fd2267972de16d8ecf3066fe3c6475cdc7e98ae21d082739ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725991258360296243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:b47c7cf7abfabb2b718302e2a0648d2751a45f43ae8380be7d32507b93bf3b1b,PodSandboxId:4e63c4c6bb84c930ce7667a861a21547562e48664bc266dca4e266f01c7d39cc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725991258478868951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2585fff
0689dfb9f34e31b9308bf3929d6411bf92c00b138057cd97140b201,PodSandboxId:ad86a56a2ec675678c65c8c1f31d87acfcaf5bb0213a5163f0c39e813506ef17,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725991258314903492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d1780eab1fe91d63193dad7c68cc1
e73d7dca8ee77fb82736e04e7d94764d9a,PodSandboxId:8a25054429ab3fcfe2f73f86024c9d64933a9fd4c9b3569ce27fd03063451bc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725991258191376223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46aa5a70ba5d09b21daa9afbfae7b536a67c11626d39d23e60b648bf231eb32b,PodSandboxId:d32b89c1b7c331b36461b8b13bbd526e4bc6241c7a258f24b5c9d649c6898aa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725991258127622656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf78d03f37b8f3a834445581ec46afb0a6d4024f4c9ccbd4773959726e096b6d,PodSandboxId:0ba1ce38894b1c79a7f8588fb5f29b8949fe2a4d8ef3540d3da58ab2e4701a14,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725991257947162806,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14554600b638e1a8c641d49ad069d21f787ee2ca8aa0be78b55e93f84c6895b4,PodSandboxId:62cb63018e3103f7fc5e00bd4ca904e9524f8a7a92c7c3a33c1587a3ef278ea6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725991257951224240,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186e126d69c5bc43c518cab04c696af90a9916197b40f61274a79ccb2b9cffcc,PodSandboxId:0d47a98eb3112a7fef1b7583c513f3ae9adff0eadaa82f636d7862596d2eac7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725991252749804505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f35f5f9c0297fd9ab025fe3115dc06cad0defe8c7d8c46b7d3aeebb921e8d37,PodSandboxId:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725990770310858417,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557,PodSandboxId:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725990640053719525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8,PodSandboxId:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725990639993359914,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d,PodSandboxId:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725990628186559343,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c,PodSandboxId:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725990625854761467,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc,PodSandboxId:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725990614322273529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa,PodSandboxId:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725990614273365582,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=57873304-ad40-4000-8f23-d57b803ab01f name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:03:43 ha-558946 crio[3620]: time="2024-09-10 18:03:43.966956481Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0db8ce2b-a12b-4af6-93e8-7013e080270a name=/runtime.v1.RuntimeService/Version
	Sep 10 18:03:43 ha-558946 crio[3620]: time="2024-09-10 18:03:43.967047839Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0db8ce2b-a12b-4af6-93e8-7013e080270a name=/runtime.v1.RuntimeService/Version
	Sep 10 18:03:43 ha-558946 crio[3620]: time="2024-09-10 18:03:43.968529506Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0fdd4373-8108-4412-81c7-0eabeff3e6b9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:03:43 ha-558946 crio[3620]: time="2024-09-10 18:03:43.968983861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991423968959493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0fdd4373-8108-4412-81c7-0eabeff3e6b9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:03:43 ha-558946 crio[3620]: time="2024-09-10 18:03:43.969610100Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e671a65-dc77-474c-a92f-75919cf60477 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:03:43 ha-558946 crio[3620]: time="2024-09-10 18:03:43.969681751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e671a65-dc77-474c-a92f-75919cf60477 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:03:43 ha-558946 crio[3620]: time="2024-09-10 18:03:43.970144291Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6b2ffc5626f3775e9fb373cfb1b3350651ba8735b9190ab8dda1f3dbe9f1a30,PodSandboxId:682e75d7e519a2fd2267972de16d8ecf3066fe3c6475cdc7e98ae21d082739ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725991313294806034,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d16d6af2ae8be09fc14df4895ec76820b67e61632967bcac0af26aeadbcd339,PodSandboxId:d32b89c1b7c331b36461b8b13bbd526e4bc6241c7a258f24b5c9d649c6898aa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725991299321524381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2173425b282f1680eb621b99d603c566252599facbc35fc1d2fb8df76fc2b318,PodSandboxId:62cb63018e3103f7fc5e00bd4ca904e9524f8a7a92c7c3a33c1587a3ef278ea6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725991299307616134,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc93ca6fe5c1facf9e80ebe158beabecc3d1a8c32be0c454e3c89a52fa894422,PodSandboxId:6b30967ed8560b2e42d3fbb805e0e4872e4cefd3ac86bfe9d100580327591709,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725991291620475922,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10efff272db119d6c6a88edf63354fef6d528a7cadb208799e802d1e8affa0b7,PodSandboxId:34b147bddaf58b77c20fb3fdd19c44187b9cbd620c70e7fa7178429411bf12ae,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725991273167566246,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1dc86399c32e0e26e2d6ddcbf3bc74,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b6f7dc0df387bab7c1cc41a014568a8ec1ab51dab00c335a986e02d4529b71,PodSandboxId:a619eb4fb7dfbdd628ddf6c50657882a796c6ff57fdb8afae83432d53d414f6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725991258474946983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:7eeb82852ac4491fb60767bb4952b2f14f7290f33ee4288e83e54b4a39a88bf6,PodSandboxId:682e75d7e519a2fd2267972de16d8ecf3066fe3c6475cdc7e98ae21d082739ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725991258360296243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:b47c7cf7abfabb2b718302e2a0648d2751a45f43ae8380be7d32507b93bf3b1b,PodSandboxId:4e63c4c6bb84c930ce7667a861a21547562e48664bc266dca4e266f01c7d39cc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725991258478868951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2585fff
0689dfb9f34e31b9308bf3929d6411bf92c00b138057cd97140b201,PodSandboxId:ad86a56a2ec675678c65c8c1f31d87acfcaf5bb0213a5163f0c39e813506ef17,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725991258314903492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d1780eab1fe91d63193dad7c68cc1
e73d7dca8ee77fb82736e04e7d94764d9a,PodSandboxId:8a25054429ab3fcfe2f73f86024c9d64933a9fd4c9b3569ce27fd03063451bc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725991258191376223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46aa5a70ba5d09b21daa9afbfae7b536a67c11626d39d23e60b648bf231eb32b,PodSandboxId:d32b89c1b7c331b36461b8b13bbd526e4bc6241c7a258f24b5c9d649c6898aa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725991258127622656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf78d03f37b8f3a834445581ec46afb0a6d4024f4c9ccbd4773959726e096b6d,PodSandboxId:0ba1ce38894b1c79a7f8588fb5f29b8949fe2a4d8ef3540d3da58ab2e4701a14,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725991257947162806,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14554600b638e1a8c641d49ad069d21f787ee2ca8aa0be78b55e93f84c6895b4,PodSandboxId:62cb63018e3103f7fc5e00bd4ca904e9524f8a7a92c7c3a33c1587a3ef278ea6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725991257951224240,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186e126d69c5bc43c518cab04c696af90a9916197b40f61274a79ccb2b9cffcc,PodSandboxId:0d47a98eb3112a7fef1b7583c513f3ae9adff0eadaa82f636d7862596d2eac7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725991252749804505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f35f5f9c0297fd9ab025fe3115dc06cad0defe8c7d8c46b7d3aeebb921e8d37,PodSandboxId:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725990770310858417,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557,PodSandboxId:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725990640053719525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8,PodSandboxId:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725990639993359914,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d,PodSandboxId:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725990628186559343,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c,PodSandboxId:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725990625854761467,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc,PodSandboxId:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725990614322273529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa,PodSandboxId:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725990614273365582,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e671a65-dc77-474c-a92f-75919cf60477 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:03:44 ha-558946 crio[3620]: time="2024-09-10 18:03:44.021248829Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75cf9e9a-40d7-497a-90e7-d0eddf645434 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:03:44 ha-558946 crio[3620]: time="2024-09-10 18:03:44.021378356Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75cf9e9a-40d7-497a-90e7-d0eddf645434 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:03:44 ha-558946 crio[3620]: time="2024-09-10 18:03:44.022602852Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0df991cc-ded4-4ba1-9593-49730bc8938d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:03:44 ha-558946 crio[3620]: time="2024-09-10 18:03:44.023028928Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991424023009136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0df991cc-ded4-4ba1-9593-49730bc8938d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:03:44 ha-558946 crio[3620]: time="2024-09-10 18:03:44.023646771Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7eb63228-7888-4913-8097-1dff9fbf9fb4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:03:44 ha-558946 crio[3620]: time="2024-09-10 18:03:44.023784111Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7eb63228-7888-4913-8097-1dff9fbf9fb4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:03:44 ha-558946 crio[3620]: time="2024-09-10 18:03:44.024269165Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6b2ffc5626f3775e9fb373cfb1b3350651ba8735b9190ab8dda1f3dbe9f1a30,PodSandboxId:682e75d7e519a2fd2267972de16d8ecf3066fe3c6475cdc7e98ae21d082739ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725991313294806034,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d16d6af2ae8be09fc14df4895ec76820b67e61632967bcac0af26aeadbcd339,PodSandboxId:d32b89c1b7c331b36461b8b13bbd526e4bc6241c7a258f24b5c9d649c6898aa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725991299321524381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2173425b282f1680eb621b99d603c566252599facbc35fc1d2fb8df76fc2b318,PodSandboxId:62cb63018e3103f7fc5e00bd4ca904e9524f8a7a92c7c3a33c1587a3ef278ea6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725991299307616134,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc93ca6fe5c1facf9e80ebe158beabecc3d1a8c32be0c454e3c89a52fa894422,PodSandboxId:6b30967ed8560b2e42d3fbb805e0e4872e4cefd3ac86bfe9d100580327591709,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725991291620475922,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10efff272db119d6c6a88edf63354fef6d528a7cadb208799e802d1e8affa0b7,PodSandboxId:34b147bddaf58b77c20fb3fdd19c44187b9cbd620c70e7fa7178429411bf12ae,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725991273167566246,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1dc86399c32e0e26e2d6ddcbf3bc74,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b6f7dc0df387bab7c1cc41a014568a8ec1ab51dab00c335a986e02d4529b71,PodSandboxId:a619eb4fb7dfbdd628ddf6c50657882a796c6ff57fdb8afae83432d53d414f6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725991258474946983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:7eeb82852ac4491fb60767bb4952b2f14f7290f33ee4288e83e54b4a39a88bf6,PodSandboxId:682e75d7e519a2fd2267972de16d8ecf3066fe3c6475cdc7e98ae21d082739ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725991258360296243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:b47c7cf7abfabb2b718302e2a0648d2751a45f43ae8380be7d32507b93bf3b1b,PodSandboxId:4e63c4c6bb84c930ce7667a861a21547562e48664bc266dca4e266f01c7d39cc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725991258478868951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2585fff
0689dfb9f34e31b9308bf3929d6411bf92c00b138057cd97140b201,PodSandboxId:ad86a56a2ec675678c65c8c1f31d87acfcaf5bb0213a5163f0c39e813506ef17,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725991258314903492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d1780eab1fe91d63193dad7c68cc1
e73d7dca8ee77fb82736e04e7d94764d9a,PodSandboxId:8a25054429ab3fcfe2f73f86024c9d64933a9fd4c9b3569ce27fd03063451bc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725991258191376223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46aa5a70ba5d09b21daa9afbfae7b536a67c11626d39d23e60b648bf231eb32b,PodSandboxId:d32b89c1b7c331b36461b8b13bbd526e4bc6241c7a258f24b5c9d649c6898aa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725991258127622656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf78d03f37b8f3a834445581ec46afb0a6d4024f4c9ccbd4773959726e096b6d,PodSandboxId:0ba1ce38894b1c79a7f8588fb5f29b8949fe2a4d8ef3540d3da58ab2e4701a14,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725991257947162806,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14554600b638e1a8c641d49ad069d21f787ee2ca8aa0be78b55e93f84c6895b4,PodSandboxId:62cb63018e3103f7fc5e00bd4ca904e9524f8a7a92c7c3a33c1587a3ef278ea6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725991257951224240,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186e126d69c5bc43c518cab04c696af90a9916197b40f61274a79ccb2b9cffcc,PodSandboxId:0d47a98eb3112a7fef1b7583c513f3ae9adff0eadaa82f636d7862596d2eac7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725991252749804505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f35f5f9c0297fd9ab025fe3115dc06cad0defe8c7d8c46b7d3aeebb921e8d37,PodSandboxId:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725990770310858417,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557,PodSandboxId:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725990640053719525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8,PodSandboxId:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725990639993359914,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d,PodSandboxId:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725990628186559343,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c,PodSandboxId:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725990625854761467,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc,PodSandboxId:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725990614322273529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa,PodSandboxId:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725990614273365582,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7eb63228-7888-4913-8097-1dff9fbf9fb4 name=/runtime.v1.RuntimeService/ListContainers
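	The Version, ImageFsInfo, and ListContainers entries above are the kubelet's periodic polling of cri-o over the CRI gRPC API (note the "No filters were applied" requests). A minimal sketch of issuing the same three calls directly, assuming the default cri-o socket at /var/run/crio/crio.sock and the k8s.io/cri-api v1 client (illustrative only, not part of the captured log), might look like:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumption: cri-o listens on its default unix socket path.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial cri-o: %v", err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)

		// /runtime.v1.RuntimeService/Version
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatalf("Version: %v", err)
		}
		fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

		// /runtime.v1.ImageService/ImageFsInfo
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			log.Fatalf("ImageFsInfo: %v", err)
		}
		for _, u := range fs.ImageFilesystems {
			fmt.Printf("image fs %s: %d bytes used\n", u.FsId.Mountpoint, u.UsedBytes.Value)
		}

		// /runtime.v1.RuntimeService/ListContainers with an empty filter,
		// mirroring the unfiltered list requests seen in the log.
		list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range list.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}

	The output of such a query corresponds to the container list serialized in the ListContainersResponse entries above (kube-apiserver, etcd, coredns, kube-proxy, kindnet-cni, storage-provisioner, busybox, and their exited earlier attempts).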
	Sep 10 18:03:44 ha-558946 crio[3620]: time="2024-09-10 18:03:44.067676738Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe7e85d5-811e-4273-9fbe-70330cbb894c name=/runtime.v1.RuntimeService/Version
	Sep 10 18:03:44 ha-558946 crio[3620]: time="2024-09-10 18:03:44.067779535Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe7e85d5-811e-4273-9fbe-70330cbb894c name=/runtime.v1.RuntimeService/Version
	Sep 10 18:03:44 ha-558946 crio[3620]: time="2024-09-10 18:03:44.070010655Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd36ed03-397d-48c4-aa43-c5729d51005a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:03:44 ha-558946 crio[3620]: time="2024-09-10 18:03:44.070544535Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991424070522632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd36ed03-397d-48c4-aa43-c5729d51005a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:03:44 ha-558946 crio[3620]: time="2024-09-10 18:03:44.071299578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8e76122-ba32-4752-89a9-0304b4f0aeea name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:03:44 ha-558946 crio[3620]: time="2024-09-10 18:03:44.071384085Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e8e76122-ba32-4752-89a9-0304b4f0aeea name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:03:44 ha-558946 crio[3620]: time="2024-09-10 18:03:44.071980846Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6b2ffc5626f3775e9fb373cfb1b3350651ba8735b9190ab8dda1f3dbe9f1a30,PodSandboxId:682e75d7e519a2fd2267972de16d8ecf3066fe3c6475cdc7e98ae21d082739ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725991313294806034,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d16d6af2ae8be09fc14df4895ec76820b67e61632967bcac0af26aeadbcd339,PodSandboxId:d32b89c1b7c331b36461b8b13bbd526e4bc6241c7a258f24b5c9d649c6898aa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725991299321524381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2173425b282f1680eb621b99d603c566252599facbc35fc1d2fb8df76fc2b318,PodSandboxId:62cb63018e3103f7fc5e00bd4ca904e9524f8a7a92c7c3a33c1587a3ef278ea6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725991299307616134,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc93ca6fe5c1facf9e80ebe158beabecc3d1a8c32be0c454e3c89a52fa894422,PodSandboxId:6b30967ed8560b2e42d3fbb805e0e4872e4cefd3ac86bfe9d100580327591709,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725991291620475922,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10efff272db119d6c6a88edf63354fef6d528a7cadb208799e802d1e8affa0b7,PodSandboxId:34b147bddaf58b77c20fb3fdd19c44187b9cbd620c70e7fa7178429411bf12ae,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725991273167566246,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1dc86399c32e0e26e2d6ddcbf3bc74,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b6f7dc0df387bab7c1cc41a014568a8ec1ab51dab00c335a986e02d4529b71,PodSandboxId:a619eb4fb7dfbdd628ddf6c50657882a796c6ff57fdb8afae83432d53d414f6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725991258474946983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:7eeb82852ac4491fb60767bb4952b2f14f7290f33ee4288e83e54b4a39a88bf6,PodSandboxId:682e75d7e519a2fd2267972de16d8ecf3066fe3c6475cdc7e98ae21d082739ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725991258360296243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:b47c7cf7abfabb2b718302e2a0648d2751a45f43ae8380be7d32507b93bf3b1b,PodSandboxId:4e63c4c6bb84c930ce7667a861a21547562e48664bc266dca4e266f01c7d39cc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725991258478868951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2585fff
0689dfb9f34e31b9308bf3929d6411bf92c00b138057cd97140b201,PodSandboxId:ad86a56a2ec675678c65c8c1f31d87acfcaf5bb0213a5163f0c39e813506ef17,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725991258314903492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d1780eab1fe91d63193dad7c68cc1
e73d7dca8ee77fb82736e04e7d94764d9a,PodSandboxId:8a25054429ab3fcfe2f73f86024c9d64933a9fd4c9b3569ce27fd03063451bc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725991258191376223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46aa5a70ba5d09b21daa9afbfae7b536a67c11626d39d23e60b648bf231eb32b,PodSandboxId:d32b89c1b7c331b36461b8b13bbd526e4bc6241c7a258f24b5c9d649c6898aa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725991258127622656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf78d03f37b8f3a834445581ec46afb0a6d4024f4c9ccbd4773959726e096b6d,PodSandboxId:0ba1ce38894b1c79a7f8588fb5f29b8949fe2a4d8ef3540d3da58ab2e4701a14,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725991257947162806,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14554600b638e1a8c641d49ad069d21f787ee2ca8aa0be78b55e93f84c6895b4,PodSandboxId:62cb63018e3103f7fc5e00bd4ca904e9524f8a7a92c7c3a33c1587a3ef278ea6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725991257951224240,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186e126d69c5bc43c518cab04c696af90a9916197b40f61274a79ccb2b9cffcc,PodSandboxId:0d47a98eb3112a7fef1b7583c513f3ae9adff0eadaa82f636d7862596d2eac7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725991252749804505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f35f5f9c0297fd9ab025fe3115dc06cad0defe8c7d8c46b7d3aeebb921e8d37,PodSandboxId:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725990770310858417,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557,PodSandboxId:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725990640053719525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8,PodSandboxId:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725990639993359914,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d,PodSandboxId:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725990628186559343,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c,PodSandboxId:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725990625854761467,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc,PodSandboxId:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725990614322273529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa,PodSandboxId:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725990614273365582,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e8e76122-ba32-4752-89a9-0304b4f0aeea name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d6b2ffc5626f3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   682e75d7e519a       storage-provisioner
	4d16d6af2ae8b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Running             kube-controller-manager   2                   d32b89c1b7c33       kube-controller-manager-ha-558946
	2173425b282f1       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Running             kube-apiserver            3                   62cb63018e310       kube-apiserver-ha-558946
	fc93ca6fe5c1f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   6b30967ed8560       busybox-7dff88458-2t4ms
	10efff272db11       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   34b147bddaf58       kube-vip-ha-558946
	b47c7cf7abfab       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   4e63c4c6bb84c       kindnet-n8n67
	b8b6f7dc0df38       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      2 minutes ago        Running             kube-proxy                1                   a619eb4fb7dfb       kube-proxy-gjqzx
	7eeb82852ac44       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   682e75d7e519a       storage-provisioner
	fd2585fff0689       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      2 minutes ago        Running             kube-scheduler            1                   ad86a56a2ec67       kube-scheduler-ha-558946
	7d1780eab1fe9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   8a25054429ab3       coredns-6f6b679f8f-5pv7s
	46aa5a70ba5d0       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Exited              kube-controller-manager   1                   d32b89c1b7c33       kube-controller-manager-ha-558946
	14554600b638e       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Exited              kube-apiserver            2                   62cb63018e310       kube-apiserver-ha-558946
	bf78d03f37b8f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   0ba1ce38894b1       etcd-ha-558946
	186e126d69c5b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   0d47a98eb3112       coredns-6f6b679f8f-fmcmc
	7f35f5f9c0297       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   4704ca681891e       busybox-7dff88458-2t4ms
	142a15832796a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   1c4e9776e0278       coredns-6f6b679f8f-5pv7s
	6899c9efcedba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   434931d96929c       coredns-6f6b679f8f-fmcmc
	e119a0b88cc46       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    13 minutes ago       Exited              kindnet-cni               0                   70857c92d854f       kindnet-n8n67
	1668374a3d17c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago       Exited              kube-proxy                0                   718077b7bfae6       kube-proxy-gjqzx
	edfccb881d415       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago       Exited              kube-scheduler            0                   8c5d88f2921ad       kube-scheduler-ha-558946
	5ebc6afb00309       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   ca3c0af433ced       etcd-ha-558946
	
	
	==> coredns [142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557] <==
	[INFO] 10.244.2.2:55393 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185738s
	[INFO] 10.244.2.2:37830 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000216713s
	[INFO] 10.244.2.2:45453 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139889s
	[INFO] 10.244.1.2:46063 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168728s
	[INFO] 10.244.1.2:59108 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116561s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1850&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1849&timeout=8m35s&timeoutSeconds=515&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1850&timeout=5m6s&timeoutSeconds=306&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1093767674]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 17:59:05.683) (total time: 11867ms):
	Trace[1093767674]: ---"Objects listed" error:Unauthorized 11867ms (17:59:17.550)
	Trace[1093767674]: [11.867351345s] [11.867351345s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[82884957]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 17:59:05.207) (total time: 12343ms):
	Trace[82884957]: ---"Objects listed" error:Unauthorized 12343ms (17:59:17.551)
	Trace[82884957]: [12.343941336s] [12.343941336s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1261403297]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 17:59:05.085) (total time: 12465ms):
	Trace[1261403297]: ---"Objects listed" error:Unauthorized 12465ms (17:59:17.551)
	Trace[1261403297]: [12.465997099s] [12.465997099s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [186e126d69c5bc43c518cab04c696af90a9916197b40f61274a79ccb2b9cffcc] <==
	[INFO] plugin/kubernetes: Trace[2113675064]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:00:59.236) (total time: 10002ms):
	Trace[2113675064]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (18:01:09.238)
	Trace[2113675064]: [10.002225664s] [10.002225664s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[50871242]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:01:08.235) (total time: 10001ms):
	Trace[50871242]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:01:18.236)
	Trace[50871242]: [10.001973146s] [10.001973146s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41410->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1060531173]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:01:11.790) (total time: 11819ms):
	Trace[1060531173]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41410->10.96.0.1:443: read: connection reset by peer 11818ms (18:01:23.608)
	Trace[1060531173]: [11.819130281s] [11.819130281s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41410->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8] <==
	[INFO] 10.244.0.4:34074 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013369s
	[INFO] 10.244.0.4:34879 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107543s
	[INFO] 10.244.2.2:60365 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000255288s
	[INFO] 10.244.1.2:49914 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000123225s
	[INFO] 10.244.1.2:59420 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000122155s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1850&timeout=5m10s&timeoutSeconds=310&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1826&timeout=8m56s&timeoutSeconds=536&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1822&timeout=6m45s&timeoutSeconds=405&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[496628189]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 17:59:05.450) (total time: 12099ms):
	Trace[496628189]: ---"Objects listed" error:Unauthorized 12099ms (17:59:17.550)
	Trace[496628189]: [12.099830739s] [12.099830739s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1525756423]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 17:59:05.525) (total time: 12025ms):
	Trace[1525756423]: ---"Objects listed" error:Unauthorized 12025ms (17:59:17.550)
	Trace[1525756423]: [12.025488262s] [12.025488262s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1080527706]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 17:59:05.149) (total time: 12402ms):
	Trace[1080527706]: ---"Objects listed" error:Unauthorized 12402ms (17:59:17.551)
	Trace[1080527706]: [12.402937793s] [12.402937793s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7d1780eab1fe91d63193dad7c68cc1e73d7dca8ee77fb82736e04e7d94764d9a] <==
	Trace[1492769942]: [13.832963258s] [13.832963258s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:60524->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:60520->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1467704825]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:01:09.764) (total time: 13844ms):
	Trace[1467704825]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:60520->10.96.0.1:443: read: connection reset by peer 13844ms (18:01:23.609)
	Trace[1467704825]: [13.844307192s] [13.844307192s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:60520->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-558946
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-558946
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-558946
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T17_50_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:50:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-558946
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:03:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:01:47 +0000   Tue, 10 Sep 2024 17:50:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:01:47 +0000   Tue, 10 Sep 2024 17:50:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:01:47 +0000   Tue, 10 Sep 2024 17:50:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:01:47 +0000   Tue, 10 Sep 2024 17:50:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    ha-558946
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6888e6da1bdd45dda1c087615a5c1996
	  System UUID:                6888e6da-1bdd-45dd-a1c0-87615a5c1996
	  Boot ID:                    a2579398-c9ae-48e0-a407-b08542361a94
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2t4ms              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-5pv7s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-6f6b679f8f-fmcmc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-558946                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-n8n67                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-558946             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-558946    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-gjqzx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-558946             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-558946                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m5s                   kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-558946 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-558946 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-558946 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-558946 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-558946 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-558946 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           13m                    node-controller  Node ha-558946 event: Registered Node ha-558946 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-558946 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-558946 event: Registered Node ha-558946 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-558946 event: Registered Node ha-558946 in Controller
	  Warning  ContainerGCFailed        3m24s (x2 over 4m24s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             3m10s (x2 over 3m35s)  kubelet          Node ha-558946 status is now: NodeNotReady
	  Normal   RegisteredNode           2m9s                   node-controller  Node ha-558946 event: Registered Node ha-558946 in Controller
	  Normal   RegisteredNode           2m                     node-controller  Node ha-558946 event: Registered Node ha-558946 in Controller
	  Normal   RegisteredNode           38s                    node-controller  Node ha-558946 event: Registered Node ha-558946 in Controller
	
	
	Name:               ha-558946-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-558946-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-558946
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T17_51_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:51:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-558946-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:03:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:02:22 +0000   Tue, 10 Sep 2024 18:01:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:02:22 +0000   Tue, 10 Sep 2024 18:01:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:02:22 +0000   Tue, 10 Sep 2024 18:01:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:02:22 +0000   Tue, 10 Sep 2024 18:01:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    ha-558946-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 db1a36bf29714274bd4e3db4349b13e5
	  System UUID:                db1a36bf-2971-4274-bd4e-3db4349b13e5
	  Boot ID:                    b212953d-76e1-4d89-8b39-baac7eb29a58
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xnl8m                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-558946-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-sfr7m                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-558946-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-558946-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-xggtm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-558946-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-558946-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m                     kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-558946-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-558946-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-558946-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-558946-m02 event: Registered Node ha-558946-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-558946-m02 event: Registered Node ha-558946-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-558946-m02 event: Registered Node ha-558946-m02 in Controller
	  Normal  NodeNotReady             9m9s                   node-controller  Node ha-558946-m02 status is now: NodeNotReady
	  Normal  Starting                 2m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m30s (x8 over 2m30s)  kubelet          Node ha-558946-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m30s (x8 over 2m30s)  kubelet          Node ha-558946-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m30s (x7 over 2m30s)  kubelet          Node ha-558946-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m9s                   node-controller  Node ha-558946-m02 event: Registered Node ha-558946-m02 in Controller
	  Normal  RegisteredNode           2m                     node-controller  Node ha-558946-m02 event: Registered Node ha-558946-m02 in Controller
	  Normal  RegisteredNode           38s                    node-controller  Node ha-558946-m02 event: Registered Node ha-558946-m02 in Controller
	
	
	Name:               ha-558946-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-558946-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-558946
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T17_52_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:52:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-558946-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:03:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:03:15 +0000   Tue, 10 Sep 2024 18:02:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:03:15 +0000   Tue, 10 Sep 2024 18:02:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:03:15 +0000   Tue, 10 Sep 2024 18:02:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:03:15 +0000   Tue, 10 Sep 2024 18:02:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-558946-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8bf15e91753540d5b2e0f1553e9cfa68
	  System UUID:                8bf15e91-7535-40d5-b2e0-f1553e9cfa68
	  Boot ID:                    3e7d7243-81f6-4ea6-9a3a-6156382232ac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-szkr7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-558946-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-mshf2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-558946-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-558946-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-8ldlx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-558946-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-558946-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 41s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-558946-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-558946-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-558946-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-558946-m03 event: Registered Node ha-558946-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-558946-m03 event: Registered Node ha-558946-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-558946-m03 event: Registered Node ha-558946-m03 in Controller
	  Normal   RegisteredNode           2m9s               node-controller  Node ha-558946-m03 event: Registered Node ha-558946-m03 in Controller
	  Normal   RegisteredNode           2m                 node-controller  Node ha-558946-m03 event: Registered Node ha-558946-m03 in Controller
	  Normal   NodeNotReady             89s                node-controller  Node ha-558946-m03 status is now: NodeNotReady
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  60s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 60s (x2 over 60s)  kubelet          Node ha-558946-m03 has been rebooted, boot id: 3e7d7243-81f6-4ea6-9a3a-6156382232ac
	  Normal   NodeHasSufficientMemory  60s (x3 over 60s)  kubelet          Node ha-558946-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x3 over 60s)  kubelet          Node ha-558946-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x3 over 60s)  kubelet          Node ha-558946-m03 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             60s                kubelet          Node ha-558946-m03 status is now: NodeNotReady
	  Normal   NodeReady                60s                kubelet          Node ha-558946-m03 status is now: NodeReady
	  Normal   RegisteredNode           38s                node-controller  Node ha-558946-m03 event: Registered Node ha-558946-m03 in Controller
	
	
	Name:               ha-558946-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-558946-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-558946
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T17_53_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:53:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-558946-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:03:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:03:36 +0000   Tue, 10 Sep 2024 18:03:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:03:36 +0000   Tue, 10 Sep 2024 18:03:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:03:36 +0000   Tue, 10 Sep 2024 18:03:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:03:36 +0000   Tue, 10 Sep 2024 18:03:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-558946-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aded0f54a0334cb59bab04e35bcf99b0
	  System UUID:                aded0f54-a033-4cb5-9bab-04e35bcf99b0
	  Boot ID:                    7cb07829-d4bc-4530-a664-dcc19ff07df6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7kzcw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-mk6xt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-558946-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-558946-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-558946-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-558946-m04 event: Registered Node ha-558946-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-558946-m04 event: Registered Node ha-558946-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-558946-m04 event: Registered Node ha-558946-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-558946-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m9s               node-controller  Node ha-558946-m04 event: Registered Node ha-558946-m04 in Controller
	  Normal   RegisteredNode           2m                 node-controller  Node ha-558946-m04 event: Registered Node ha-558946-m04 in Controller
	  Normal   NodeNotReady             89s                node-controller  Node ha-558946-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           38s                node-controller  Node ha-558946-m04 event: Registered Node ha-558946-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s (x3 over 8s)    kubelet          Node ha-558946-m04 has been rebooted, boot id: 7cb07829-d4bc-4530-a664-dcc19ff07df6
	  Normal   NodeHasSufficientMemory  8s (x4 over 8s)    kubelet          Node ha-558946-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x4 over 8s)    kubelet          Node ha-558946-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x4 over 8s)    kubelet          Node ha-558946-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             8s                 kubelet          Node ha-558946-m04 status is now: NodeNotReady
	  Normal   NodeReady                8s (x2 over 8s)    kubelet          Node ha-558946-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep10 17:50] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.058035] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055902] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.190997] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.121180] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.267314] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.918739] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +4.478653] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.062428] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.320707] systemd-fstab-generator[1311]: Ignoring "noauto" option for root device
	[  +0.078655] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.553971] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.155608] kauditd_printk_skb: 38 callbacks suppressed
	[Sep10 17:51] kauditd_printk_skb: 24 callbacks suppressed
	[Sep10 17:57] kauditd_printk_skb: 1 callbacks suppressed
	[Sep10 18:00] systemd-fstab-generator[3545]: Ignoring "noauto" option for root device
	[  +0.160317] systemd-fstab-generator[3557]: Ignoring "noauto" option for root device
	[  +0.176138] systemd-fstab-generator[3571]: Ignoring "noauto" option for root device
	[  +0.150443] systemd-fstab-generator[3583]: Ignoring "noauto" option for root device
	[  +0.300541] systemd-fstab-generator[3611]: Ignoring "noauto" option for root device
	[  +0.740504] systemd-fstab-generator[3706]: Ignoring "noauto" option for root device
	[  +5.938933] kauditd_printk_skb: 132 callbacks suppressed
	[Sep10 18:01] kauditd_printk_skb: 75 callbacks suppressed
	[ +50.631857] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa] <==
	2024/09/10 17:59:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/10 17:59:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/10 17:59:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/10 17:59:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/10 17:59:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-10T17:59:18.959485Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.109:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-10T17:59:18.959529Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.109:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-10T17:59:18.960911Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"22872ffef731375a","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-10T17:59:18.961038Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7f0112a792d03c41"}
	{"level":"info","ts":"2024-09-10T17:59:18.961232Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7f0112a792d03c41"}
	{"level":"info","ts":"2024-09-10T17:59:18.961299Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7f0112a792d03c41"}
	{"level":"info","ts":"2024-09-10T17:59:18.961470Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41"}
	{"level":"info","ts":"2024-09-10T17:59:18.961528Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41"}
	{"level":"info","ts":"2024-09-10T17:59:18.961579Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41"}
	{"level":"info","ts":"2024-09-10T17:59:18.961604Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7f0112a792d03c41"}
	{"level":"info","ts":"2024-09-10T17:59:18.961612Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T17:59:18.961620Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T17:59:18.961652Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T17:59:18.961719Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"22872ffef731375a","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T17:59:18.961760Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"22872ffef731375a","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T17:59:18.961803Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"22872ffef731375a","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T17:59:18.961830Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T17:59:18.964739Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.109:2380"}
	{"level":"info","ts":"2024-09-10T17:59:18.964826Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.109:2380"}
	{"level":"info","ts":"2024-09-10T17:59:18.964835Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-558946","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.109:2380"],"advertise-client-urls":["https://192.168.39.109:2379"]}
	
	
	==> etcd [bf78d03f37b8f3a834445581ec46afb0a6d4024f4c9ccbd4773959726e096b6d] <==
	{"level":"warn","ts":"2024-09-10T18:02:48.780482Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.241:2380/version","remote-member-id":"d8fe3a58642295be","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-10T18:02:48.780545Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d8fe3a58642295be","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-10T18:02:48.930433Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d8fe3a58642295be","rtt":"0s","error":"dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-10T18:02:48.930563Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d8fe3a58642295be","rtt":"0s","error":"dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-10T18:02:52.783141Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.241:2380/version","remote-member-id":"d8fe3a58642295be","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-10T18:02:52.783311Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d8fe3a58642295be","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-10T18:02:53.931639Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d8fe3a58642295be","rtt":"0s","error":"dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-10T18:02:53.931828Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d8fe3a58642295be","rtt":"0s","error":"dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-10T18:02:56.785585Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.241:2380/version","remote-member-id":"d8fe3a58642295be","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-10T18:02:56.785642Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d8fe3a58642295be","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-10T18:02:57.237146Z","caller":"traceutil/trace.go:171","msg":"trace[463861119] transaction","detail":"{read_only:false; response_revision:2410; number_of_response:1; }","duration":"119.524151ms","start":"2024-09-10T18:02:57.117598Z","end":"2024-09-10T18:02:57.237122Z","steps":["trace[463861119] 'process raft request'  (duration: 119.341492ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T18:02:57.909696Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T18:02:57.925917Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"22872ffef731375a","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T18:02:57.926255Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"22872ffef731375a","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T18:02:57.938592Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"22872ffef731375a","to":"d8fe3a58642295be","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-10T18:02:57.938759Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"22872ffef731375a","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T18:02:57.944250Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"22872ffef731375a","to":"d8fe3a58642295be","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-10T18:02:57.944321Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"22872ffef731375a","remote-peer-id":"d8fe3a58642295be"}
	{"level":"warn","ts":"2024-09-10T18:02:59.431185Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.602311ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-10T18:02:59.431440Z","caller":"traceutil/trace.go:171","msg":"trace[1409820815] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:0; response_revision:2417; }","duration":"112.985421ms","start":"2024-09-10T18:02:59.318432Z","end":"2024-09-10T18:02:59.431418Z","steps":["trace[1409820815] 'count revisions from in-memory index tree'  (duration: 99.726321ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T18:03:39.635215Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"d8fe3a58642295be","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"32.721881ms"}
	{"level":"warn","ts":"2024-09-10T18:03:39.635309Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"7f0112a792d03c41","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"32.821809ms"}
	{"level":"info","ts":"2024-09-10T18:03:39.636906Z","caller":"traceutil/trace.go:171","msg":"trace[1720712081] linearizableReadLoop","detail":"{readStateIndex:2988; appliedIndex:2988; }","duration":"110.877238ms","start":"2024-09-10T18:03:39.526002Z","end":"2024-09-10T18:03:39.636880Z","steps":["trace[1720712081] 'read index received'  (duration: 110.871919ms)","trace[1720712081] 'applied index is now lower than readState.Index'  (duration: 3.881µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-10T18:03:39.637276Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.248194ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-8ldlx\" ","response":"range_response_count:1 size:4870"}
	{"level":"info","ts":"2024-09-10T18:03:39.637386Z","caller":"traceutil/trace.go:171","msg":"trace[870820715] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-8ldlx; range_end:; response_count:1; response_revision:2577; }","duration":"111.375877ms","start":"2024-09-10T18:03:39.525998Z","end":"2024-09-10T18:03:39.637374Z","steps":["trace[870820715] 'agreement among raft nodes before linearized reading'  (duration: 111.148455ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:03:44 up 14 min,  0 users,  load average: 0.45, 0.55, 0.36
	Linux ha-558946 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b47c7cf7abfabb2b718302e2a0648d2751a45f43ae8380be7d32507b93bf3b1b] <==
	I0910 18:03:09.551731       1 main.go:322] Node ha-558946-m03 has CIDR [10.244.2.0/24] 
	I0910 18:03:19.549486       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 18:03:19.549652       1 main.go:299] handling current node
	I0910 18:03:19.549710       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 18:03:19.549735       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 18:03:19.549953       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0910 18:03:19.550007       1 main.go:322] Node ha-558946-m03 has CIDR [10.244.2.0/24] 
	I0910 18:03:19.550199       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 18:03:19.550255       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 18:03:29.557838       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0910 18:03:29.557918       1 main.go:322] Node ha-558946-m03 has CIDR [10.244.2.0/24] 
	I0910 18:03:29.558255       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 18:03:29.558308       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 18:03:29.558410       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 18:03:29.558444       1 main.go:299] handling current node
	I0910 18:03:29.558471       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 18:03:29.558479       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 18:03:39.551594       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 18:03:39.551712       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 18:03:39.551869       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 18:03:39.551893       1 main.go:299] handling current node
	I0910 18:03:39.551921       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 18:03:39.551953       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 18:03:39.552013       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0910 18:03:39.552031       1 main.go:322] Node ha-558946-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d] <==
	I0910 17:58:39.330420       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 17:58:49.337684       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 17:58:49.337777       1 main.go:299] handling current node
	I0910 17:58:49.337804       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 17:58:49.337821       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 17:58:49.337959       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0910 17:58:49.337992       1 main.go:322] Node ha-558946-m03 has CIDR [10.244.2.0/24] 
	I0910 17:58:49.338133       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 17:58:49.338163       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 17:58:59.331635       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 17:58:59.331749       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 17:58:59.331930       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0910 17:58:59.331954       1 main.go:322] Node ha-558946-m03 has CIDR [10.244.2.0/24] 
	I0910 17:58:59.332013       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 17:58:59.332030       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 17:58:59.332238       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 17:58:59.332275       1 main.go:299] handling current node
	I0910 17:59:09.338839       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 17:59:09.338937       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 17:59:09.339183       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 17:59:09.339227       1 main.go:299] handling current node
	I0910 17:59:09.339255       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 17:59:09.339272       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 17:59:09.339350       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0910 17:59:09.339371       1 main.go:322] Node ha-558946-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [14554600b638e1a8c641d49ad069d21f787ee2ca8aa0be78b55e93f84c6895b4] <==
	I0910 18:00:58.526798       1 options.go:228] external host was not specified, using 192.168.39.109
	I0910 18:00:58.548464       1 server.go:142] Version: v1.31.0
	I0910 18:00:58.548522       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:00:59.721187       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0910 18:00:59.738174       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0910 18:00:59.744011       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0910 18:00:59.746152       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0910 18:00:59.746511       1 instance.go:232] Using reconciler: lease
	W0910 18:01:19.718576       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0910 18:01:19.718576       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0910 18:01:19.748416       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0910 18:01:19.748419       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [2173425b282f1680eb621b99d603c566252599facbc35fc1d2fb8df76fc2b318] <==
	I0910 18:01:41.476110       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0910 18:01:41.476156       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0910 18:01:41.545799       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0910 18:01:41.545886       1 policy_source.go:224] refreshing policies
	I0910 18:01:41.564481       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0910 18:01:41.564574       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0910 18:01:41.564651       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0910 18:01:41.564910       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0910 18:01:41.565045       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0910 18:01:41.565297       1 shared_informer.go:320] Caches are synced for configmaps
	I0910 18:01:41.567213       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0910 18:01:41.577313       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0910 18:01:41.577365       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0910 18:01:41.577375       1 aggregator.go:171] initial CRD sync complete...
	I0910 18:01:41.577390       1 autoregister_controller.go:144] Starting autoregister controller
	I0910 18:01:41.577395       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0910 18:01:41.577399       1 cache.go:39] Caches are synced for autoregister controller
	I0910 18:01:41.578179       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0910 18:01:41.579584       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.241 192.168.39.96]
	I0910 18:01:41.582346       1 controller.go:615] quota admission added evaluator for: endpoints
	I0910 18:01:41.595012       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0910 18:01:41.598476       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0910 18:01:41.642265       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0910 18:01:42.475444       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0910 18:01:43.016195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.109 192.168.39.96]
	
	
	==> kube-controller-manager [46aa5a70ba5d09b21daa9afbfae7b536a67c11626d39d23e60b648bf231eb32b] <==
	I0910 18:00:59.225619       1 serving.go:386] Generated self-signed cert in-memory
	I0910 18:00:59.847672       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0910 18:00:59.847707       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:00:59.849417       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0910 18:00:59.849598       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0910 18:00:59.849815       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0910 18:00:59.850183       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0910 18:01:20.753987       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.109:8443/healthz\": dial tcp 192.168.39.109:8443: connect: connection refused"
	
	
	==> kube-controller-manager [4d16d6af2ae8be09fc14df4895ec76820b67e61632967bcac0af26aeadbcd339] <==
	I0910 18:02:09.768200       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"8442f6c6-caf3-440c-a291-7a230940da95", APIVersion:"v1", ResourceVersion:"290", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-qgm9s EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-qgm9s": the object has been modified; please apply your changes to the latest version and try again
	I0910 18:02:15.844211       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m03"
	I0910 18:02:15.844862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 18:02:15.864183       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 18:02:15.883682       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m03"
	I0910 18:02:16.087543       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.645749ms"
	I0910 18:02:16.088177       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="155.48µs"
	I0910 18:02:19.913693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m03"
	I0910 18:02:21.116565       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m03"
	I0910 18:02:22.780907       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m02"
	I0910 18:02:29.989231       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 18:02:31.195788       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 18:02:44.754250       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m03"
	I0910 18:02:44.774563       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m03"
	I0910 18:02:44.893775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m03"
	I0910 18:02:45.597939       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.297µs"
	I0910 18:03:05.036226       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.264728ms"
	I0910 18:03:05.036348       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.29µs"
	I0910 18:03:06.148774       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 18:03:06.248226       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 18:03:15.226815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m03"
	I0910 18:03:36.314638       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 18:03:36.314751       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-558946-m04"
	I0910 18:03:36.336959       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 18:03:39.917463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	
	
	==> kube-proxy [1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c] <==
	E0910 17:58:14.874562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1814\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:17.946953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-558946&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:17.947032       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-558946&resourceVersion=1850\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:17.947174       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:17.947219       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1814\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:21.017113       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1754": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:21.017195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1754\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:27.160574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1754": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:27.160656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1754\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:27.160752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:27.160776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1814\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:27.160860       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-558946&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:27.160875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-558946&resourceVersion=1850\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:36.380413       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-558946&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:36.380539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-558946&resourceVersion=1850\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:39.448917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:39.449286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1814\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:39.449646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1754": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:39.449717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1754\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:54.809699       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-558946&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:54.809889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-558946&resourceVersion=1850\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:59:04.026187       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:59:04.026290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1814\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:59:07.097985       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1754": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:59:07.098108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1754\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [b8b6f7dc0df387bab7c1cc41a014568a8ec1ab51dab00c335a986e02d4529b71] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 18:01:00.761347       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-558946\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0910 18:01:03.832710       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-558946\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0910 18:01:06.904625       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-558946\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0910 18:01:13.048586       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-558946\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0910 18:01:22.265672       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-558946\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0910 18:01:38.755957       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.109"]
	E0910 18:01:38.756170       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 18:01:38.814039       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 18:01:38.814132       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 18:01:38.814198       1 server_linux.go:169] "Using iptables Proxier"
	I0910 18:01:38.817273       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 18:01:38.818657       1 server.go:483] "Version info" version="v1.31.0"
	I0910 18:01:38.818710       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:01:38.820947       1 config.go:104] "Starting endpoint slice config controller"
	I0910 18:01:38.821049       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 18:01:38.821598       1 config.go:197] "Starting service config controller"
	I0910 18:01:38.821621       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 18:01:38.821949       1 config.go:326] "Starting node config controller"
	I0910 18:01:38.821990       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 18:01:38.922373       1 shared_informer.go:320] Caches are synced for service config
	I0910 18:01:38.922374       1 shared_informer.go:320] Caches are synced for node config
	I0910 18:01:38.922414       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc] <==
	E0910 17:50:18.783701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0910 17:50:20.762780       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0910 17:53:20.783017       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7kzcw\": pod kindnet-7kzcw is already assigned to node \"ha-558946-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-7kzcw" node="ha-558946-m04"
	E0910 17:53:20.783217       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a925295e-bc22-4154-850e-79962508c7ac(kube-system/kindnet-7kzcw) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-7kzcw"
	E0910 17:53:20.783245       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7kzcw\": pod kindnet-7kzcw is already assigned to node \"ha-558946-m04\"" pod="kube-system/kindnet-7kzcw"
	I0910 17:53:20.783283       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-7kzcw" node="ha-558946-m04"
	E0910 17:53:20.926971       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9xbp8\": pod kindnet-9xbp8 is already assigned to node \"ha-558946-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-9xbp8" node="ha-558946-m04"
	E0910 17:53:20.927165       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d228e8b7-bd1d-442c-bf6a-2240d8c2ac04(kube-system/kindnet-9xbp8) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-9xbp8"
	E0910 17:53:20.927360       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9xbp8\": pod kindnet-9xbp8 is already assigned to node \"ha-558946-m04\"" pod="kube-system/kindnet-9xbp8"
	I0910 17:53:20.927386       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-9xbp8" node="ha-558946-m04"
	E0910 17:59:08.727812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0910 17:59:08.878739       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0910 17:59:09.755034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0910 17:59:11.234316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0910 17:59:11.530993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0910 17:59:11.724899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0910 17:59:11.942038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0910 17:59:12.292205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0910 17:59:15.354875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0910 17:59:16.082128       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0910 17:59:16.218485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0910 17:59:17.123378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0910 17:59:17.853287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0910 17:59:18.611800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0910 17:59:18.890951       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fd2585fff0689dfb9f34e31b9308bf3929d6411bf92c00b138057cd97140b201] <==
	W0910 18:01:36.280377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.109:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:36.280439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.109:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:36.438321       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.109:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:36.438440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.109:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:37.050281       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.109:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:37.050427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.109:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:37.250475       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.109:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:37.250603       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.109:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:37.464329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.109:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:37.464395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.109:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:37.698298       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.109:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:37.698356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.109:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:37.816544       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.109:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:37.816651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.109:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:38.278546       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.109:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:38.278604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.109:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:38.897899       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.109:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:38.898142       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.109:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:38.943024       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.109:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:38.943250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.109:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:41.487824       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0910 18:01:41.487921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 18:01:41.488023       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0910 18:01:41.488106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0910 18:01:51.382897       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 18:02:20 ha-558946 kubelet[1318]: E0910 18:02:20.490460    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991340490199028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:02:25 ha-558946 kubelet[1318]: I0910 18:02:25.278257    1318 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-558946" podUID="810f85ef-6900-456e-877e-095d38286613"
	Sep 10 18:02:25 ha-558946 kubelet[1318]: I0910 18:02:25.300405    1318 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-558946"
	Sep 10 18:02:30 ha-558946 kubelet[1318]: I0910 18:02:30.302918    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-558946" podStartSLOduration=5.302880875 podStartE2EDuration="5.302880875s" podCreationTimestamp="2024-09-10 18:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-10 18:02:30.302843071 +0000 UTC m=+730.146161773" watchObservedRunningTime="2024-09-10 18:02:30.302880875 +0000 UTC m=+730.146199575"
	Sep 10 18:02:30 ha-558946 kubelet[1318]: E0910 18:02:30.494058    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991350493169650,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:02:30 ha-558946 kubelet[1318]: E0910 18:02:30.494138    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991350493169650,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:02:40 ha-558946 kubelet[1318]: E0910 18:02:40.495995    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991360495450225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:02:40 ha-558946 kubelet[1318]: E0910 18:02:40.496032    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991360495450225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:02:50 ha-558946 kubelet[1318]: E0910 18:02:50.498290    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991370497733264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:02:50 ha-558946 kubelet[1318]: E0910 18:02:50.498360    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991370497733264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:03:00 ha-558946 kubelet[1318]: E0910 18:03:00.505975    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991380504635621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:03:00 ha-558946 kubelet[1318]: E0910 18:03:00.506187    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991380504635621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:03:10 ha-558946 kubelet[1318]: E0910 18:03:10.507706    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991390507232247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:03:10 ha-558946 kubelet[1318]: E0910 18:03:10.507794    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991390507232247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:03:20 ha-558946 kubelet[1318]: E0910 18:03:20.297259    1318 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 18:03:20 ha-558946 kubelet[1318]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 18:03:20 ha-558946 kubelet[1318]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 18:03:20 ha-558946 kubelet[1318]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 18:03:20 ha-558946 kubelet[1318]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 18:03:20 ha-558946 kubelet[1318]: E0910 18:03:20.511875    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991400510900918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:03:20 ha-558946 kubelet[1318]: E0910 18:03:20.512002    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991400510900918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:03:30 ha-558946 kubelet[1318]: E0910 18:03:30.514001    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991410513323099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:03:30 ha-558946 kubelet[1318]: E0910 18:03:30.514037    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991410513323099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:03:40 ha-558946 kubelet[1318]: E0910 18:03:40.525022    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991420524697747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:03:40 ha-558946 kubelet[1318]: E0910 18:03:40.526391    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991420524697747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:03:43.599625   32027 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19598-5973/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-558946 -n ha-558946
helpers_test.go:261: (dbg) Run:  kubectl --context ha-558946 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (389.72s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-558946 stop -v=7 --alsologtostderr: exit status 82 (2m0.449380575s)

                                                
                                                
-- stdout --
	* Stopping node "ha-558946-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 18:04:02.886053   32425 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:04:02.886433   32425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:04:02.886444   32425 out.go:358] Setting ErrFile to fd 2...
	I0910 18:04:02.886449   32425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:04:02.886615   32425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:04:02.886825   32425 out.go:352] Setting JSON to false
	I0910 18:04:02.886898   32425 mustload.go:65] Loading cluster: ha-558946
	I0910 18:04:02.887218   32425 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:04:02.887295   32425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 18:04:02.887454   32425 mustload.go:65] Loading cluster: ha-558946
	I0910 18:04:02.887576   32425 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:04:02.887596   32425 stop.go:39] StopHost: ha-558946-m04
	I0910 18:04:02.887903   32425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:04:02.887942   32425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:04:02.902332   32425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44911
	I0910 18:04:02.902692   32425 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:04:02.903254   32425 main.go:141] libmachine: Using API Version  1
	I0910 18:04:02.903279   32425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:04:02.903605   32425 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:04:02.906450   32425 out.go:177] * Stopping node "ha-558946-m04"  ...
	I0910 18:04:02.907649   32425 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0910 18:04:02.907681   32425 main.go:141] libmachine: (ha-558946-m04) Calling .DriverName
	I0910 18:04:02.907906   32425 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0910 18:04:02.907928   32425 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHHostname
	I0910 18:04:02.910754   32425 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 18:04:02.911140   32425 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 19:03:31 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 18:04:02.911162   32425 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 18:04:02.911338   32425 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHPort
	I0910 18:04:02.911503   32425 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHKeyPath
	I0910 18:04:02.911633   32425 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHUsername
	I0910 18:04:02.911775   32425 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m04/id_rsa Username:docker}
	I0910 18:04:02.995808   32425 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0910 18:04:03.048651   32425 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0910 18:04:03.103007   32425 main.go:141] libmachine: Stopping "ha-558946-m04"...
	I0910 18:04:03.103038   32425 main.go:141] libmachine: (ha-558946-m04) Calling .GetState
	I0910 18:04:03.104521   32425 main.go:141] libmachine: (ha-558946-m04) Calling .Stop
	I0910 18:04:03.107793   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 0/120
	I0910 18:04:04.109669   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 1/120
	I0910 18:04:05.111617   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 2/120
	I0910 18:04:06.112949   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 3/120
	I0910 18:04:07.114541   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 4/120
	I0910 18:04:08.116464   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 5/120
	I0910 18:04:09.117778   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 6/120
	I0910 18:04:10.119058   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 7/120
	I0910 18:04:11.120278   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 8/120
	I0910 18:04:12.121596   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 9/120
	I0910 18:04:13.123475   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 10/120
	I0910 18:04:14.124850   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 11/120
	I0910 18:04:15.126155   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 12/120
	I0910 18:04:16.127443   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 13/120
	I0910 18:04:17.128649   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 14/120
	I0910 18:04:18.130465   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 15/120
	I0910 18:04:19.131681   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 16/120
	I0910 18:04:20.132962   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 17/120
	I0910 18:04:21.134283   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 18/120
	I0910 18:04:22.135392   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 19/120
	I0910 18:04:23.137409   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 20/120
	I0910 18:04:24.138688   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 21/120
	I0910 18:04:25.139853   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 22/120
	I0910 18:04:26.141344   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 23/120
	I0910 18:04:27.143024   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 24/120
	I0910 18:04:28.145006   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 25/120
	I0910 18:04:29.146341   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 26/120
	I0910 18:04:30.147665   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 27/120
	I0910 18:04:31.149124   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 28/120
	I0910 18:04:32.150714   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 29/120
	I0910 18:04:33.152513   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 30/120
	I0910 18:04:34.153945   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 31/120
	I0910 18:04:35.155549   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 32/120
	I0910 18:04:36.156885   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 33/120
	I0910 18:04:37.158406   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 34/120
	I0910 18:04:38.160197   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 35/120
	I0910 18:04:39.161522   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 36/120
	I0910 18:04:40.163629   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 37/120
	I0910 18:04:41.164960   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 38/120
	I0910 18:04:42.166376   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 39/120
	I0910 18:04:43.168370   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 40/120
	I0910 18:04:44.169797   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 41/120
	I0910 18:04:45.171371   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 42/120
	I0910 18:04:46.172806   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 43/120
	I0910 18:04:47.174397   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 44/120
	I0910 18:04:48.176219   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 45/120
	I0910 18:04:49.177404   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 46/120
	I0910 18:04:50.178666   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 47/120
	I0910 18:04:51.179910   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 48/120
	I0910 18:04:52.181282   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 49/120
	I0910 18:04:53.183357   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 50/120
	I0910 18:04:54.184639   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 51/120
	I0910 18:04:55.185931   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 52/120
	I0910 18:04:56.187460   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 53/120
	I0910 18:04:57.189047   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 54/120
	I0910 18:04:58.190730   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 55/120
	I0910 18:04:59.192081   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 56/120
	I0910 18:05:00.193591   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 57/120
	I0910 18:05:01.195608   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 58/120
	I0910 18:05:02.197044   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 59/120
	I0910 18:05:03.199034   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 60/120
	I0910 18:05:04.200531   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 61/120
	I0910 18:05:05.201757   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 62/120
	I0910 18:05:06.203039   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 63/120
	I0910 18:05:07.204290   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 64/120
	I0910 18:05:08.206175   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 65/120
	I0910 18:05:09.207368   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 66/120
	I0910 18:05:10.208716   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 67/120
	I0910 18:05:11.209909   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 68/120
	I0910 18:05:12.211221   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 69/120
	I0910 18:05:13.213143   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 70/120
	I0910 18:05:14.214380   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 71/120
	I0910 18:05:15.215822   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 72/120
	I0910 18:05:16.217005   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 73/120
	I0910 18:05:17.218354   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 74/120
	I0910 18:05:18.220216   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 75/120
	I0910 18:05:19.221749   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 76/120
	I0910 18:05:20.223495   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 77/120
	I0910 18:05:21.225731   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 78/120
	I0910 18:05:22.227643   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 79/120
	I0910 18:05:23.229404   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 80/120
	I0910 18:05:24.230627   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 81/120
	I0910 18:05:25.232090   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 82/120
	I0910 18:05:26.233644   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 83/120
	I0910 18:05:27.234938   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 84/120
	I0910 18:05:28.236853   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 85/120
	I0910 18:05:29.238221   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 86/120
	I0910 18:05:30.239812   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 87/120
	I0910 18:05:31.241120   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 88/120
	I0910 18:05:32.242578   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 89/120
	I0910 18:05:33.244139   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 90/120
	I0910 18:05:34.245557   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 91/120
	I0910 18:05:35.247527   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 92/120
	I0910 18:05:36.248790   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 93/120
	I0910 18:05:37.250177   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 94/120
	I0910 18:05:38.251978   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 95/120
	I0910 18:05:39.253269   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 96/120
	I0910 18:05:40.255673   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 97/120
	I0910 18:05:41.257087   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 98/120
	I0910 18:05:42.258242   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 99/120
	I0910 18:05:43.260515   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 100/120
	I0910 18:05:44.261693   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 101/120
	I0910 18:05:45.263108   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 102/120
	I0910 18:05:46.264395   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 103/120
	I0910 18:05:47.265678   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 104/120
	I0910 18:05:48.267020   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 105/120
	I0910 18:05:49.268624   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 106/120
	I0910 18:05:50.269925   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 107/120
	I0910 18:05:51.271394   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 108/120
	I0910 18:05:52.272660   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 109/120
	I0910 18:05:53.274347   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 110/120
	I0910 18:05:54.275787   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 111/120
	I0910 18:05:55.277138   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 112/120
	I0910 18:05:56.278493   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 113/120
	I0910 18:05:57.279790   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 114/120
	I0910 18:05:58.281674   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 115/120
	I0910 18:05:59.283113   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 116/120
	I0910 18:06:00.284454   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 117/120
	I0910 18:06:01.285930   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 118/120
	I0910 18:06:02.287279   32425 main.go:141] libmachine: (ha-558946-m04) Waiting for machine to stop 119/120
	I0910 18:06:03.288284   32425 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0910 18:06:03.288358   32425 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0910 18:06:03.290191   32425 out.go:201] 
	W0910 18:06:03.291400   32425 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0910 18:06:03.291418   32425 out.go:270] * 
	* 
	W0910 18:06:03.294397   32425 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 18:06:03.295571   32425 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-558946 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr: exit status 3 (19.008411696s)

                                                
                                                
-- stdout --
	ha-558946
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558946-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558946-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 18:06:03.337675   32862 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:06:03.337955   32862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:06:03.337968   32862 out.go:358] Setting ErrFile to fd 2...
	I0910 18:06:03.337974   32862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:06:03.338248   32862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:06:03.338461   32862 out.go:352] Setting JSON to false
	I0910 18:06:03.338489   32862 mustload.go:65] Loading cluster: ha-558946
	I0910 18:06:03.338578   32862 notify.go:220] Checking for updates...
	I0910 18:06:03.338861   32862 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:06:03.338875   32862 status.go:255] checking status of ha-558946 ...
	I0910 18:06:03.339229   32862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:06:03.339288   32862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:06:03.358727   32862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
	I0910 18:06:03.359137   32862 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:06:03.359748   32862 main.go:141] libmachine: Using API Version  1
	I0910 18:06:03.359775   32862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:06:03.360198   32862 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:06:03.360443   32862 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 18:06:03.362018   32862 status.go:330] ha-558946 host status = "Running" (err=<nil>)
	I0910 18:06:03.362039   32862 host.go:66] Checking if "ha-558946" exists ...
	I0910 18:06:03.362373   32862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:06:03.362407   32862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:06:03.377830   32862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36233
	I0910 18:06:03.378215   32862 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:06:03.378673   32862 main.go:141] libmachine: Using API Version  1
	I0910 18:06:03.378692   32862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:06:03.379002   32862 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:06:03.379181   32862 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 18:06:03.381767   32862 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:06:03.382202   32862 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 18:06:03.382237   32862 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:06:03.382351   32862 host.go:66] Checking if "ha-558946" exists ...
	I0910 18:06:03.382650   32862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:06:03.382684   32862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:06:03.396637   32862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33719
	I0910 18:06:03.397002   32862 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:06:03.397450   32862 main.go:141] libmachine: Using API Version  1
	I0910 18:06:03.397471   32862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:06:03.397792   32862 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:06:03.397982   32862 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 18:06:03.398138   32862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 18:06:03.398159   32862 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 18:06:03.401375   32862 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:06:03.402051   32862 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 18:06:03.402089   32862 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:06:03.402230   32862 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 18:06:03.402431   32862 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 18:06:03.402592   32862 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 18:06:03.402737   32862 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 18:06:03.485784   32862 ssh_runner.go:195] Run: systemctl --version
	I0910 18:06:03.491938   32862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:06:03.508534   32862 kubeconfig.go:125] found "ha-558946" server: "https://192.168.39.254:8443"
	I0910 18:06:03.508565   32862 api_server.go:166] Checking apiserver status ...
	I0910 18:06:03.508598   32862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:06:03.523897   32862 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4945/cgroup
	W0910 18:06:03.533858   32862 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4945/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:06:03.533900   32862 ssh_runner.go:195] Run: ls
	I0910 18:06:03.539557   32862 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 18:06:03.544719   32862 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 18:06:03.544742   32862 status.go:422] ha-558946 apiserver status = Running (err=<nil>)
	I0910 18:06:03.544754   32862 status.go:257] ha-558946 status: &{Name:ha-558946 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:06:03.544776   32862 status.go:255] checking status of ha-558946-m02 ...
	I0910 18:06:03.545104   32862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:06:03.545165   32862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:06:03.561575   32862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40411
	I0910 18:06:03.561919   32862 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:06:03.562368   32862 main.go:141] libmachine: Using API Version  1
	I0910 18:06:03.562388   32862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:06:03.562675   32862 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:06:03.562853   32862 main.go:141] libmachine: (ha-558946-m02) Calling .GetState
	I0910 18:06:03.564362   32862 status.go:330] ha-558946-m02 host status = "Running" (err=<nil>)
	I0910 18:06:03.564378   32862 host.go:66] Checking if "ha-558946-m02" exists ...
	I0910 18:06:03.564687   32862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:06:03.564736   32862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:06:03.578554   32862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45529
	I0910 18:06:03.578927   32862 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:06:03.579360   32862 main.go:141] libmachine: Using API Version  1
	I0910 18:06:03.579379   32862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:06:03.579647   32862 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:06:03.579794   32862 main.go:141] libmachine: (ha-558946-m02) Calling .GetIP
	I0910 18:06:03.582445   32862 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 18:06:03.582875   32862 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 19:01:03 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 18:06:03.582901   32862 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 18:06:03.583014   32862 host.go:66] Checking if "ha-558946-m02" exists ...
	I0910 18:06:03.583407   32862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:06:03.583460   32862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:06:03.597144   32862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37429
	I0910 18:06:03.597496   32862 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:06:03.597933   32862 main.go:141] libmachine: Using API Version  1
	I0910 18:06:03.597956   32862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:06:03.598270   32862 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:06:03.598451   32862 main.go:141] libmachine: (ha-558946-m02) Calling .DriverName
	I0910 18:06:03.598613   32862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 18:06:03.598640   32862 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHHostname
	I0910 18:06:03.601323   32862 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 18:06:03.601765   32862 main.go:141] libmachine: (ha-558946-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:52:22", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 19:01:03 +0000 UTC Type:0 Mac:52:54:00:68:52:22 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-558946-m02 Clientid:01:52:54:00:68:52:22}
	I0910 18:06:03.601790   32862 main.go:141] libmachine: (ha-558946-m02) DBG | domain ha-558946-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:68:52:22 in network mk-ha-558946
	I0910 18:06:03.601926   32862 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHPort
	I0910 18:06:03.602077   32862 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHKeyPath
	I0910 18:06:03.602239   32862 main.go:141] libmachine: (ha-558946-m02) Calling .GetSSHUsername
	I0910 18:06:03.602362   32862 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m02/id_rsa Username:docker}
	I0910 18:06:03.682155   32862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:06:03.699634   32862 kubeconfig.go:125] found "ha-558946" server: "https://192.168.39.254:8443"
	I0910 18:06:03.699660   32862 api_server.go:166] Checking apiserver status ...
	I0910 18:06:03.699696   32862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:06:03.720823   32862 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup
	W0910 18:06:03.736344   32862 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:06:03.736404   32862 ssh_runner.go:195] Run: ls
	I0910 18:06:03.740595   32862 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 18:06:03.744894   32862 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 18:06:03.744915   32862 status.go:422] ha-558946-m02 apiserver status = Running (err=<nil>)
	I0910 18:06:03.744924   32862 status.go:257] ha-558946-m02 status: &{Name:ha-558946-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:06:03.744937   32862 status.go:255] checking status of ha-558946-m04 ...
	I0910 18:06:03.745291   32862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:06:03.745330   32862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:06:03.761841   32862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45513
	I0910 18:06:03.762342   32862 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:06:03.762867   32862 main.go:141] libmachine: Using API Version  1
	I0910 18:06:03.762889   32862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:06:03.763188   32862 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:06:03.763350   32862 main.go:141] libmachine: (ha-558946-m04) Calling .GetState
	I0910 18:06:03.764992   32862 status.go:330] ha-558946-m04 host status = "Running" (err=<nil>)
	I0910 18:06:03.765007   32862 host.go:66] Checking if "ha-558946-m04" exists ...
	I0910 18:06:03.765446   32862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:06:03.765541   32862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:06:03.780887   32862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45571
	I0910 18:06:03.781283   32862 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:06:03.781851   32862 main.go:141] libmachine: Using API Version  1
	I0910 18:06:03.781877   32862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:06:03.782209   32862 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:06:03.782409   32862 main.go:141] libmachine: (ha-558946-m04) Calling .GetIP
	I0910 18:06:03.784863   32862 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 18:06:03.785251   32862 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 19:03:31 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 18:06:03.785279   32862 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 18:06:03.785427   32862 host.go:66] Checking if "ha-558946-m04" exists ...
	I0910 18:06:03.785812   32862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:06:03.785867   32862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:06:03.801099   32862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46611
	I0910 18:06:03.801558   32862 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:06:03.802010   32862 main.go:141] libmachine: Using API Version  1
	I0910 18:06:03.802033   32862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:06:03.802362   32862 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:06:03.802556   32862 main.go:141] libmachine: (ha-558946-m04) Calling .DriverName
	I0910 18:06:03.802746   32862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 18:06:03.802768   32862 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHHostname
	I0910 18:06:03.805490   32862 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 18:06:03.805836   32862 main.go:141] libmachine: (ha-558946-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:bd:c1", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 19:03:31 +0000 UTC Type:0 Mac:52:54:00:3e:bd:c1 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-558946-m04 Clientid:01:52:54:00:3e:bd:c1}
	I0910 18:06:03.805868   32862 main.go:141] libmachine: (ha-558946-m04) DBG | domain ha-558946-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:3e:bd:c1 in network mk-ha-558946
	I0910 18:06:03.805971   32862 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHPort
	I0910 18:06:03.806119   32862 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHKeyPath
	I0910 18:06:03.806243   32862 main.go:141] libmachine: (ha-558946-m04) Calling .GetSSHUsername
	I0910 18:06:03.806381   32862 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946-m04/id_rsa Username:docker}
	W0910 18:06:22.305286   32862 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.14:22: connect: no route to host
	W0910 18:06:22.305367   32862 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	E0910 18:06:22.305380   32862 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	I0910 18:06:22.305389   32862 status.go:257] ha-558946-m04 status: &{Name:ha-558946-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0910 18:06:22.305407   32862 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-558946 -n ha-558946
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-558946 logs -n 25: (1.643685574s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-558946 ssh -n ha-558946-m02 sudo cat                                          | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m03_ha-558946-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m03:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04:/home/docker/cp-test_ha-558946-m03_ha-558946-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946-m04 sudo cat                                          | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m03_ha-558946-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-558946 cp testdata/cp-test.txt                                                | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1374294018/001/cp-test_ha-558946-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946:/home/docker/cp-test_ha-558946-m04_ha-558946.txt                       |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946 sudo cat                                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m04_ha-558946.txt                                 |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m02:/home/docker/cp-test_ha-558946-m04_ha-558946-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946-m02 sudo cat                                          | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m04_ha-558946-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m03:/home/docker/cp-test_ha-558946-m04_ha-558946-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n                                                                 | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | ha-558946-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-558946 ssh -n ha-558946-m03 sudo cat                                          | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC | 10 Sep 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-558946-m04_ha-558946-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-558946 node stop m02 -v=7                                                     | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-558946 node start m02 -v=7                                                    | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-558946 -v=7                                                           | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-558946 -v=7                                                                | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-558946 --wait=true -v=7                                                    | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 17:59 UTC | 10 Sep 24 18:03 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-558946                                                                | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 18:03 UTC |                     |
	| node    | ha-558946 node delete m03 -v=7                                                   | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 18:03 UTC | 10 Sep 24 18:04 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-558946 stop -v=7                                                              | ha-558946 | jenkins | v1.34.0 | 10 Sep 24 18:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:59:17
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:59:17.996107   30598 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:59:17.996381   30598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:59:17.996395   30598 out.go:358] Setting ErrFile to fd 2...
	I0910 17:59:17.996402   30598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:59:17.996571   30598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:59:17.997167   30598 out.go:352] Setting JSON to false
	I0910 17:59:17.998168   30598 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2510,"bootTime":1725988648,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:59:17.998251   30598 start.go:139] virtualization: kvm guest
	I0910 17:59:18.000603   30598 out.go:177] * [ha-558946] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 17:59:18.002223   30598 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:59:18.002225   30598 notify.go:220] Checking for updates...
	I0910 17:59:18.004661   30598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:59:18.005863   30598 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:59:18.006960   30598 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:59:18.008085   30598 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 17:59:18.009266   30598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:59:18.010749   30598 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:59:18.010834   30598 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:59:18.011225   30598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:59:18.011278   30598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:59:18.026475   30598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39219
	I0910 17:59:18.026869   30598 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:59:18.027526   30598 main.go:141] libmachine: Using API Version  1
	I0910 17:59:18.027552   30598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:59:18.027946   30598 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:59:18.028135   30598 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:59:18.062430   30598 out.go:177] * Using the kvm2 driver based on existing profile
	I0910 17:59:18.063510   30598 start.go:297] selected driver: kvm2
	I0910 17:59:18.063527   30598 start.go:901] validating driver "kvm2" against &{Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.14 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:59:18.063712   30598 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:59:18.064056   30598 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:59:18.064131   30598 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 17:59:18.079062   30598 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 17:59:18.079759   30598 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:59:18.079797   30598 cni.go:84] Creating CNI manager for ""
	I0910 17:59:18.079808   30598 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0910 17:59:18.079875   30598 start.go:340] cluster config:
	{Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.14 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:59:18.080038   30598 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:59:18.082493   30598 out.go:177] * Starting "ha-558946" primary control-plane node in "ha-558946" cluster
	I0910 17:59:18.083654   30598 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:59:18.083698   30598 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0910 17:59:18.083705   30598 cache.go:56] Caching tarball of preloaded images
	I0910 17:59:18.083795   30598 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 17:59:18.083812   30598 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 17:59:18.083929   30598 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/config.json ...
	I0910 17:59:18.084196   30598 start.go:360] acquireMachinesLock for ha-558946: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 17:59:18.084260   30598 start.go:364] duration metric: took 44.442µs to acquireMachinesLock for "ha-558946"
	I0910 17:59:18.084277   30598 start.go:96] Skipping create...Using existing machine configuration
	I0910 17:59:18.084288   30598 fix.go:54] fixHost starting: 
	I0910 17:59:18.084642   30598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:59:18.084681   30598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:59:18.098486   30598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0910 17:59:18.098911   30598 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:59:18.099415   30598 main.go:141] libmachine: Using API Version  1
	I0910 17:59:18.099439   30598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:59:18.099720   30598 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:59:18.099919   30598 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:59:18.100061   30598 main.go:141] libmachine: (ha-558946) Calling .GetState
	I0910 17:59:18.101597   30598 fix.go:112] recreateIfNeeded on ha-558946: state=Running err=<nil>
	W0910 17:59:18.101619   30598 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 17:59:18.103365   30598 out.go:177] * Updating the running kvm2 "ha-558946" VM ...
	I0910 17:59:18.104475   30598 machine.go:93] provisionDockerMachine start ...
	I0910 17:59:18.104491   30598 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 17:59:18.104693   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:59:18.107144   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.107654   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:59:18.107669   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.107926   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:59:18.108079   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:59:18.108224   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:59:18.108350   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:59:18.108511   30598 main.go:141] libmachine: Using SSH client type: native
	I0910 17:59:18.108717   30598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:59:18.108729   30598 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 17:59:18.218106   30598 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-558946
	
	I0910 17:59:18.218146   30598 main.go:141] libmachine: (ha-558946) Calling .GetMachineName
	I0910 17:59:18.218380   30598 buildroot.go:166] provisioning hostname "ha-558946"
	I0910 17:59:18.218406   30598 main.go:141] libmachine: (ha-558946) Calling .GetMachineName
	I0910 17:59:18.218611   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:59:18.221293   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.221639   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:59:18.221658   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.221794   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:59:18.221956   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:59:18.222113   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:59:18.222300   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:59:18.222455   30598 main.go:141] libmachine: Using SSH client type: native
	I0910 17:59:18.222620   30598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:59:18.222631   30598 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-558946 && echo "ha-558946" | sudo tee /etc/hostname
	I0910 17:59:18.349873   30598 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-558946
	
	I0910 17:59:18.349900   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:59:18.352462   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.352824   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:59:18.352851   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.352983   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:59:18.353176   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:59:18.353397   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:59:18.353587   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:59:18.353768   30598 main.go:141] libmachine: Using SSH client type: native
	I0910 17:59:18.353958   30598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:59:18.353983   30598 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-558946' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-558946/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-558946' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 17:59:18.461957   30598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:59:18.461987   30598 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 17:59:18.462018   30598 buildroot.go:174] setting up certificates
	I0910 17:59:18.462026   30598 provision.go:84] configureAuth start
	I0910 17:59:18.462037   30598 main.go:141] libmachine: (ha-558946) Calling .GetMachineName
	I0910 17:59:18.462326   30598 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 17:59:18.464679   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.465086   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:59:18.465112   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.465232   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:59:18.467334   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.467656   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:59:18.467681   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.467803   30598 provision.go:143] copyHostCerts
	I0910 17:59:18.467832   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 17:59:18.467884   30598 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 17:59:18.467894   30598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 17:59:18.467973   30598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 17:59:18.468073   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 17:59:18.468100   30598 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 17:59:18.468110   30598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 17:59:18.468150   30598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 17:59:18.468206   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 17:59:18.468230   30598 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 17:59:18.468239   30598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 17:59:18.468275   30598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 17:59:18.468349   30598 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.ha-558946 san=[127.0.0.1 192.168.39.109 ha-558946 localhost minikube]
	I0910 17:59:18.599928   30598 provision.go:177] copyRemoteCerts
	I0910 17:59:18.599985   30598 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 17:59:18.600004   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:59:18.602648   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.602959   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:59:18.602993   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.603179   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:59:18.603338   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:59:18.603499   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:59:18.603617   30598 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 17:59:18.687689   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0910 17:59:18.687773   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 17:59:18.714065   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0910 17:59:18.714131   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0910 17:59:18.743470   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0910 17:59:18.743534   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 17:59:18.768716   30598 provision.go:87] duration metric: took 306.678576ms to configureAuth
	I0910 17:59:18.768736   30598 buildroot.go:189] setting minikube options for container-runtime
	I0910 17:59:18.768923   30598 config.go:182] Loaded profile config "ha-558946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:59:18.768984   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 17:59:18.771487   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.771890   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 17:59:18.771928   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 17:59:18.772106   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 17:59:18.772318   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:59:18.772514   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 17:59:18.772654   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 17:59:18.772820   30598 main.go:141] libmachine: Using SSH client type: native
	I0910 17:59:18.773012   30598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 17:59:18.773030   30598 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:00:49.524020   30598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:00:49.524048   30598 machine.go:96] duration metric: took 1m31.419560916s to provisionDockerMachine
	I0910 18:00:49.524061   30598 start.go:293] postStartSetup for "ha-558946" (driver="kvm2")
	I0910 18:00:49.524071   30598 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:00:49.524085   30598 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 18:00:49.524394   30598 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:00:49.524419   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 18:00:49.527295   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.527764   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 18:00:49.527790   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.527931   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 18:00:49.528111   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 18:00:49.528263   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 18:00:49.528434   30598 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 18:00:49.613129   30598 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:00:49.617146   30598 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:00:49.617164   30598 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:00:49.617216   30598 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:00:49.617283   30598 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:00:49.617293   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /etc/ssl/certs/131212.pem
	I0910 18:00:49.617372   30598 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:00:49.627023   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:00:49.651466   30598 start.go:296] duration metric: took 127.39527ms for postStartSetup
	I0910 18:00:49.651530   30598 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 18:00:49.651802   30598 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0910 18:00:49.651828   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 18:00:49.654626   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.655000   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 18:00:49.655023   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.655199   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 18:00:49.655380   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 18:00:49.655549   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 18:00:49.655709   30598 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	W0910 18:00:49.740067   30598 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0910 18:00:49.740093   30598 fix.go:56] duration metric: took 1m31.655811019s for fixHost
	I0910 18:00:49.740112   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 18:00:49.742739   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.743109   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 18:00:49.743137   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.743267   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 18:00:49.743470   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 18:00:49.743605   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 18:00:49.743700   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 18:00:49.743822   30598 main.go:141] libmachine: Using SSH client type: native
	I0910 18:00:49.743979   30598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0910 18:00:49.743990   30598 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:00:49.850374   30598 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725991249.805675171
	
	I0910 18:00:49.850399   30598 fix.go:216] guest clock: 1725991249.805675171
	I0910 18:00:49.850409   30598 fix.go:229] Guest: 2024-09-10 18:00:49.805675171 +0000 UTC Remote: 2024-09-10 18:00:49.740099817 +0000 UTC m=+91.778943016 (delta=65.575354ms)
	I0910 18:00:49.850433   30598 fix.go:200] guest clock delta is within tolerance: 65.575354ms
	I0910 18:00:49.850439   30598 start.go:83] releasing machines lock for "ha-558946", held for 1m31.766168686s
	I0910 18:00:49.850458   30598 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 18:00:49.850726   30598 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 18:00:49.853352   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.853773   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 18:00:49.853804   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.853947   30598 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 18:00:49.854483   30598 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 18:00:49.854672   30598 main.go:141] libmachine: (ha-558946) Calling .DriverName
	I0910 18:00:49.854769   30598 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:00:49.854813   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 18:00:49.854873   30598 ssh_runner.go:195] Run: cat /version.json
	I0910 18:00:49.854895   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHHostname
	I0910 18:00:49.857378   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.857709   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.857781   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 18:00:49.857819   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.857951   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 18:00:49.858138   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 18:00:49.858199   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 18:00:49.858222   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:49.858322   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 18:00:49.858497   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHPort
	I0910 18:00:49.858497   30598 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 18:00:49.858614   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHKeyPath
	I0910 18:00:49.858755   30598 main.go:141] libmachine: (ha-558946) Calling .GetSSHUsername
	I0910 18:00:49.858876   30598 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/ha-558946/id_rsa Username:docker}
	I0910 18:00:49.938305   30598 ssh_runner.go:195] Run: systemctl --version
	I0910 18:00:49.961089   30598 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:00:50.123504   30598 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:00:50.129983   30598 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:00:50.130037   30598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:00:50.139294   30598 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0910 18:00:50.139316   30598 start.go:495] detecting cgroup driver to use...
	I0910 18:00:50.139367   30598 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:00:50.156064   30598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:00:50.169400   30598 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:00:50.169453   30598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:00:50.183120   30598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:00:50.197572   30598 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:00:50.347189   30598 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:00:50.500164   30598 docker.go:233] disabling docker service ...
	I0910 18:00:50.500232   30598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:00:50.519863   30598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:00:50.534419   30598 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:00:50.678225   30598 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:00:50.839930   30598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:00:50.854950   30598 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:00:50.873219   30598 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:00:50.873284   30598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:00:50.883371   30598 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:00:50.883422   30598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:00:50.893946   30598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:00:50.904305   30598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:00:50.914987   30598 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:00:50.925757   30598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:00:50.936216   30598 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:00:50.947374   30598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:00:50.957493   30598 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:00:50.967431   30598 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:00:50.977128   30598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:00:51.141132   30598 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:00:51.359681   30598 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:00:51.359769   30598 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:00:51.365265   30598 start.go:563] Will wait 60s for crictl version
	I0910 18:00:51.365319   30598 ssh_runner.go:195] Run: which crictl
	I0910 18:00:51.369292   30598 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:00:51.412354   30598 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:00:51.412443   30598 ssh_runner.go:195] Run: crio --version
	I0910 18:00:51.444894   30598 ssh_runner.go:195] Run: crio --version
	I0910 18:00:51.478266   30598 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 18:00:51.479386   30598 main.go:141] libmachine: (ha-558946) Calling .GetIP
	I0910 18:00:51.481963   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:51.482301   30598 main.go:141] libmachine: (ha-558946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:8f:4f", ip: ""} in network mk-ha-558946: {Iface:virbr1 ExpiryTime:2024-09-10 18:49:53 +0000 UTC Type:0 Mac:52:54:00:19:8f:4f Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-558946 Clientid:01:52:54:00:19:8f:4f}
	I0910 18:00:51.482327   30598 main.go:141] libmachine: (ha-558946) DBG | domain ha-558946 has defined IP address 192.168.39.109 and MAC address 52:54:00:19:8f:4f in network mk-ha-558946
	I0910 18:00:51.482517   30598 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 18:00:51.487420   30598 kubeadm.go:883] updating cluster {Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.14 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:00:51.487612   30598 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:00:51.487672   30598 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:00:51.529965   30598 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:00:51.529986   30598 crio.go:433] Images already preloaded, skipping extraction
	I0910 18:00:51.530032   30598 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:00:51.565475   30598 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:00:51.565496   30598 cache_images.go:84] Images are preloaded, skipping loading
	I0910 18:00:51.565504   30598 kubeadm.go:934] updating node { 192.168.39.109 8443 v1.31.0 crio true true} ...
	I0910 18:00:51.565601   30598 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-558946 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:00:51.565658   30598 ssh_runner.go:195] Run: crio config
	I0910 18:00:51.624935   30598 cni.go:84] Creating CNI manager for ""
	I0910 18:00:51.624954   30598 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0910 18:00:51.624970   30598 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:00:51.624991   30598 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.109 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-558946 NodeName:ha-558946 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:00:51.625154   30598 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-558946"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
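	The file generated above is a single multi-document kubeadm YAML (written below to /var/tmp/minikube/kubeadm.yaml.new): an InitConfiguration for the node-local bootstrap (advertise address 192.168.39.109:8443, CRI-O socket, kubelet node-ip), a ClusterConfiguration for cluster-wide settings (controlPlaneEndpoint control-plane.minikube.internal:8443, pod subnet 10.244.0.0/16, service subnet 10.96.0.0/12), and a KubeletConfiguration plus KubeProxyConfiguration. As a stdlib-only Go sketch (purely illustrative, not part of minikube), the documents and their kinds can be enumerated like this:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// listKinds splits a multi-document kubeadm YAML on "---" separators and
	// returns the top-level "kind:" value of each document.
	func listKinds(data string) []string {
		var kinds []string
		for _, doc := range strings.Split(data, "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if k, ok := strings.CutPrefix(strings.TrimSpace(line), "kind: "); ok {
					kinds = append(kinds, k)
					break
				}
			}
		}
		return kinds
	}

	func main() {
		// Path taken from the scp step below; adjust as needed.
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Prints [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
		// for the config shown above.
		fmt.Println(listKinds(string(data)))
	}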
	
	I0910 18:00:51.625180   30598 kube-vip.go:115] generating kube-vip config ...
	I0910 18:00:51.625222   30598 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0910 18:00:51.637492   30598 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0910 18:00:51.637583   30598 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
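	The manifest above is the static pod that gets written to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down: it announces the shared control-plane address 192.168.39.254 on eth0, uses leader election (vip_leaderelection with the plndr-cp-lock lease) so only one control-plane node holds the VIP at a time, and, per the "auto-enabling control-plane load-balancing" line above, fronts the API servers on port 8443 via lb_enable/lb_port. A minimal Go text/template sketch of how such a manifest can be rendered (field names and template text are illustrative assumptions, not minikube's actual kube-vip.go):

	package main

	import (
		"os"
		"text/template"
	)

	// vipConfig holds the values substituted into the manifest template below.
	// Illustrative only; minikube's real generator (kube-vip.go in the log above) is not shown here.
	type vipConfig struct {
		VIP       string // shared control-plane address, e.g. 192.168.39.254
		Interface string // NIC the VIP is announced on, e.g. eth0
		Port      string // API server port fronted by the VIP
	}

	const manifestTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    args: ["manager"]
	    env:
	    - name: vip_interface
	      value: {{.Interface}}
	    - name: address
	      value: "{{.VIP}}"
	    - name: port
	      value: "{{.Port}}"
	    - name: cp_enable
	      value: "true"
	    - name: vip_leaderelection
	      value: "true"
	  hostNetwork: true
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
		// Values taken from the cluster config in this run.
		_ = t.Execute(os.Stdout, vipConfig{VIP: "192.168.39.254", Interface: "eth0", Port: "8443"})
	}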
	I0910 18:00:51.637631   30598 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:00:51.647138   30598 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:00:51.647189   30598 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0910 18:00:51.656377   30598 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0910 18:00:51.672707   30598 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:00:51.688415   30598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0910 18:00:51.704846   30598 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0910 18:00:51.721742   30598 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0910 18:00:51.725705   30598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:00:51.871355   30598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:00:51.886087   30598 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946 for IP: 192.168.39.109
	I0910 18:00:51.886126   30598 certs.go:194] generating shared ca certs ...
	I0910 18:00:51.886145   30598 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:00:51.886318   30598 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:00:51.886374   30598 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:00:51.886388   30598 certs.go:256] generating profile certs ...
	I0910 18:00:51.886489   30598 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/client.key
	I0910 18:00:51.886523   30598 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.38a11416
	I0910 18:00:51.886551   30598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.38a11416 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.109 192.168.39.96 192.168.39.241 192.168.39.254]
	I0910 18:00:52.140635   30598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.38a11416 ...
	I0910 18:00:52.140669   30598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.38a11416: {Name:mk08913af0cdeb71c169c88b43462bb77ddac860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:00:52.140848   30598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.38a11416 ...
	I0910 18:00:52.140861   30598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.38a11416: {Name:mk8e47ac795705402ab5bb9615c3b69d125b73ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:00:52.140950   30598 certs.go:381] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt.38a11416 -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt
	I0910 18:00:52.141111   30598 certs.go:385] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key.38a11416 -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key
	I0910 18:00:52.141251   30598 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key
	I0910 18:00:52.141267   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 18:00:52.141287   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0910 18:00:52.141303   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 18:00:52.141318   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 18:00:52.141334   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 18:00:52.141349   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 18:00:52.141363   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 18:00:52.141375   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 18:00:52.141429   30598 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:00:52.141463   30598 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:00:52.141478   30598 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:00:52.141508   30598 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:00:52.141533   30598 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:00:52.141560   30598 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:00:52.141600   30598 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:00:52.141631   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /usr/share/ca-certificates/131212.pem
	I0910 18:00:52.141647   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:00:52.141663   30598 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem -> /usr/share/ca-certificates/13121.pem
	I0910 18:00:52.142210   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:00:52.168458   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:00:52.191790   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:00:52.215143   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:00:52.238405   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0910 18:00:52.262460   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:00:52.288651   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:00:52.312566   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/ha-558946/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:00:52.336090   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:00:52.360210   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:00:52.384087   30598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:00:52.407158   30598 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:00:52.423104   30598 ssh_runner.go:195] Run: openssl version
	I0910 18:00:52.428929   30598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:00:52.439481   30598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:00:52.444185   30598 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:00:52.444229   30598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:00:52.449783   30598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:00:52.458965   30598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:00:52.469437   30598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:00:52.473744   30598 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:00:52.473779   30598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:00:52.479239   30598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:00:52.488304   30598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:00:52.498573   30598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:00:52.502888   30598 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:00:52.502928   30598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:00:52.508437   30598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:00:52.517503   30598 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:00:52.521996   30598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:00:52.527496   30598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:00:52.532849   30598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:00:52.538163   30598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:00:52.543867   30598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:00:52.549158   30598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
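	Each `openssl x509 -checkend 86400` call above exits non-zero when the named certificate expires within the next 24 hours (86400 seconds), presumably so the apiserver, etcd, and front-proxy client certs are regenerated rather than reused when they are close to expiry. The equivalent check in Go, as an illustrative sketch (the helper name is an assumption, not minikube code):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring `openssl x509 -noout -in <path> -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// One of the certificates checked in the run above.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}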
	I0910 18:00:52.558344   30598 kubeadm.go:392] StartCluster: {Name:ha-558946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-558946 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.14 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:00:52.558444   30598 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:00:52.558486   30598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:00:52.628849   30598 cri.go:89] found id: "76252b520e2e2ef7bed8846d0750cacf3bd574fc6f7c3662f0e367e820690317"
	I0910 18:00:52.628871   30598 cri.go:89] found id: "915faa9c083e42d87148d930b63d2760a0666c3b6af5efa1b22adaffcc7a4875"
	I0910 18:00:52.628876   30598 cri.go:89] found id: "5bdf2bcf00f8265018e407f2babfe0d87b9d40e5399bac6ae2db8ca05366d76f"
	I0910 18:00:52.628879   30598 cri.go:89] found id: "839bf8fe43954c8f890e2c72c1cdd9e7f7ea8b844dfb1726e564e776771c6e18"
	I0910 18:00:52.628882   30598 cri.go:89] found id: "142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557"
	I0910 18:00:52.628885   30598 cri.go:89] found id: "6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8"
	I0910 18:00:52.628887   30598 cri.go:89] found id: "e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d"
	I0910 18:00:52.628889   30598 cri.go:89] found id: "1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c"
	I0910 18:00:52.628892   30598 cri.go:89] found id: "284b2d71723b7871cbf3305fb262bc61d05babb848d5f60cc805c17f9bd0a04e"
	I0910 18:00:52.628898   30598 cri.go:89] found id: "edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc"
	I0910 18:00:52.628900   30598 cri.go:89] found id: "a97a13adca4b5dce2fca4dd9c379c35eb00fbc1282dbac89320ee7500e30af5d"
	I0910 18:00:52.628903   30598 cri.go:89] found id: "4056c90198fe8b04ee4ae9c0719d9142563fc0c3951c133cd4f874b1f144a509"
	I0910 18:00:52.628905   30598 cri.go:89] found id: "5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa"
	I0910 18:00:52.628908   30598 cri.go:89] found id: ""
	I0910 18:00:52.628945   30598 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 10 18:06:22 ha-558946 crio[3620]: time="2024-09-10 18:06:22.897245947Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991582897219957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c023fd0-7194-4f49-aad6-2151211324d9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:06:22 ha-558946 crio[3620]: time="2024-09-10 18:06:22.897766606Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00e99727-40f1-4ec5-9d17-aaecd1941a49 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:06:22 ha-558946 crio[3620]: time="2024-09-10 18:06:22.897837461Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00e99727-40f1-4ec5-9d17-aaecd1941a49 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:06:22 ha-558946 crio[3620]: time="2024-09-10 18:06:22.898325833Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6b2ffc5626f3775e9fb373cfb1b3350651ba8735b9190ab8dda1f3dbe9f1a30,PodSandboxId:682e75d7e519a2fd2267972de16d8ecf3066fe3c6475cdc7e98ae21d082739ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725991313294806034,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d16d6af2ae8be09fc14df4895ec76820b67e61632967bcac0af26aeadbcd339,PodSandboxId:d32b89c1b7c331b36461b8b13bbd526e4bc6241c7a258f24b5c9d649c6898aa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725991299321524381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2173425b282f1680eb621b99d603c566252599facbc35fc1d2fb8df76fc2b318,PodSandboxId:62cb63018e3103f7fc5e00bd4ca904e9524f8a7a92c7c3a33c1587a3ef278ea6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725991299307616134,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc93ca6fe5c1facf9e80ebe158beabecc3d1a8c32be0c454e3c89a52fa894422,PodSandboxId:6b30967ed8560b2e42d3fbb805e0e4872e4cefd3ac86bfe9d100580327591709,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725991291620475922,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10efff272db119d6c6a88edf63354fef6d528a7cadb208799e802d1e8affa0b7,PodSandboxId:34b147bddaf58b77c20fb3fdd19c44187b9cbd620c70e7fa7178429411bf12ae,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725991273167566246,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1dc86399c32e0e26e2d6ddcbf3bc74,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b6f7dc0df387bab7c1cc41a014568a8ec1ab51dab00c335a986e02d4529b71,PodSandboxId:a619eb4fb7dfbdd628ddf6c50657882a796c6ff57fdb8afae83432d53d414f6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725991258474946983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:7eeb82852ac4491fb60767bb4952b2f14f7290f33ee4288e83e54b4a39a88bf6,PodSandboxId:682e75d7e519a2fd2267972de16d8ecf3066fe3c6475cdc7e98ae21d082739ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725991258360296243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:b47c7cf7abfabb2b718302e2a0648d2751a45f43ae8380be7d32507b93bf3b1b,PodSandboxId:4e63c4c6bb84c930ce7667a861a21547562e48664bc266dca4e266f01c7d39cc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725991258478868951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2585fff
0689dfb9f34e31b9308bf3929d6411bf92c00b138057cd97140b201,PodSandboxId:ad86a56a2ec675678c65c8c1f31d87acfcaf5bb0213a5163f0c39e813506ef17,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725991258314903492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d1780eab1fe91d63193dad7c68cc1
e73d7dca8ee77fb82736e04e7d94764d9a,PodSandboxId:8a25054429ab3fcfe2f73f86024c9d64933a9fd4c9b3569ce27fd03063451bc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725991258191376223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46aa5a70ba5d09b21daa9afbfae7b536a67c11626d39d23e60b648bf231eb32b,PodSandboxId:d32b89c1b7c331b36461b8b13bbd526e4bc6241c7a258f24b5c9d649c6898aa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725991258127622656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf78d03f37b8f3a834445581ec46afb0a6d4024f4c9ccbd4773959726e096b6d,PodSandboxId:0ba1ce38894b1c79a7f8588fb5f29b8949fe2a4d8ef3540d3da58ab2e4701a14,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725991257947162806,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14554600b638e1a8c641d49ad069d21f787ee2ca8aa0be78b55e93f84c6895b4,PodSandboxId:62cb63018e3103f7fc5e00bd4ca904e9524f8a7a92c7c3a33c1587a3ef278ea6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725991257951224240,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186e126d69c5bc43c518cab04c696af90a9916197b40f61274a79ccb2b9cffcc,PodSandboxId:0d47a98eb3112a7fef1b7583c513f3ae9adff0eadaa82f636d7862596d2eac7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725991252749804505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f35f5f9c0297fd9ab025fe3115dc06cad0defe8c7d8c46b7d3aeebb921e8d37,PodSandboxId:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725990770310858417,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557,PodSandboxId:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725990640053719525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8,PodSandboxId:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725990639993359914,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d,PodSandboxId:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725990628186559343,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c,PodSandboxId:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725990625854761467,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc,PodSandboxId:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725990614322273529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa,PodSandboxId:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725990614273365582,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00e99727-40f1-4ec5-9d17-aaecd1941a49 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:06:22 ha-558946 crio[3620]: time="2024-09-10 18:06:22.942288172Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eafd7527-ab2f-405e-8bd3-5b5a1081cf74 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:06:22 ha-558946 crio[3620]: time="2024-09-10 18:06:22.942376654Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eafd7527-ab2f-405e-8bd3-5b5a1081cf74 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:06:22 ha-558946 crio[3620]: time="2024-09-10 18:06:22.945655854Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e78c8da5-ad6a-41e4-8463-7c4d236ddf97 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:06:22 ha-558946 crio[3620]: time="2024-09-10 18:06:22.946177996Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991582946147136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e78c8da5-ad6a-41e4-8463-7c4d236ddf97 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:06:22 ha-558946 crio[3620]: time="2024-09-10 18:06:22.946749269Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=209d3455-18ff-48ea-984b-bbf6da6f61ae name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:06:22 ha-558946 crio[3620]: time="2024-09-10 18:06:22.946835277Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=209d3455-18ff-48ea-984b-bbf6da6f61ae name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:06:22 ha-558946 crio[3620]: time="2024-09-10 18:06:22.947364834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6b2ffc5626f3775e9fb373cfb1b3350651ba8735b9190ab8dda1f3dbe9f1a30,PodSandboxId:682e75d7e519a2fd2267972de16d8ecf3066fe3c6475cdc7e98ae21d082739ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725991313294806034,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d16d6af2ae8be09fc14df4895ec76820b67e61632967bcac0af26aeadbcd339,PodSandboxId:d32b89c1b7c331b36461b8b13bbd526e4bc6241c7a258f24b5c9d649c6898aa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725991299321524381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2173425b282f1680eb621b99d603c566252599facbc35fc1d2fb8df76fc2b318,PodSandboxId:62cb63018e3103f7fc5e00bd4ca904e9524f8a7a92c7c3a33c1587a3ef278ea6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725991299307616134,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc93ca6fe5c1facf9e80ebe158beabecc3d1a8c32be0c454e3c89a52fa894422,PodSandboxId:6b30967ed8560b2e42d3fbb805e0e4872e4cefd3ac86bfe9d100580327591709,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725991291620475922,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10efff272db119d6c6a88edf63354fef6d528a7cadb208799e802d1e8affa0b7,PodSandboxId:34b147bddaf58b77c20fb3fdd19c44187b9cbd620c70e7fa7178429411bf12ae,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725991273167566246,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1dc86399c32e0e26e2d6ddcbf3bc74,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b6f7dc0df387bab7c1cc41a014568a8ec1ab51dab00c335a986e02d4529b71,PodSandboxId:a619eb4fb7dfbdd628ddf6c50657882a796c6ff57fdb8afae83432d53d414f6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725991258474946983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:7eeb82852ac4491fb60767bb4952b2f14f7290f33ee4288e83e54b4a39a88bf6,PodSandboxId:682e75d7e519a2fd2267972de16d8ecf3066fe3c6475cdc7e98ae21d082739ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725991258360296243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:b47c7cf7abfabb2b718302e2a0648d2751a45f43ae8380be7d32507b93bf3b1b,PodSandboxId:4e63c4c6bb84c930ce7667a861a21547562e48664bc266dca4e266f01c7d39cc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725991258478868951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2585fff
0689dfb9f34e31b9308bf3929d6411bf92c00b138057cd97140b201,PodSandboxId:ad86a56a2ec675678c65c8c1f31d87acfcaf5bb0213a5163f0c39e813506ef17,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725991258314903492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d1780eab1fe91d63193dad7c68cc1
e73d7dca8ee77fb82736e04e7d94764d9a,PodSandboxId:8a25054429ab3fcfe2f73f86024c9d64933a9fd4c9b3569ce27fd03063451bc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725991258191376223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46aa5a70ba5d09b21daa9afbfae7b536a67c11626d39d23e60b648bf231eb32b,PodSandboxId:d32b89c1b7c331b36461b8b13bbd526e4bc6241c7a258f24b5c9d649c6898aa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725991258127622656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf78d03f37b8f3a834445581ec46afb0a6d4024f4c9ccbd4773959726e096b6d,PodSandboxId:0ba1ce38894b1c79a7f8588fb5f29b8949fe2a4d8ef3540d3da58ab2e4701a14,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725991257947162806,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14554600b638e1a8c641d49ad069d21f787ee2ca8aa0be78b55e93f84c6895b4,PodSandboxId:62cb63018e3103f7fc5e00bd4ca904e9524f8a7a92c7c3a33c1587a3ef278ea6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725991257951224240,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186e126d69c5bc43c518cab04c696af90a9916197b40f61274a79ccb2b9cffcc,PodSandboxId:0d47a98eb3112a7fef1b7583c513f3ae9adff0eadaa82f636d7862596d2eac7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725991252749804505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f35f5f9c0297fd9ab025fe3115dc06cad0defe8c7d8c46b7d3aeebb921e8d37,PodSandboxId:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725990770310858417,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557,PodSandboxId:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725990640053719525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8,PodSandboxId:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725990639993359914,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d,PodSandboxId:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725990628186559343,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c,PodSandboxId:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725990625854761467,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc,PodSandboxId:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725990614322273529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa,PodSandboxId:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725990614273365582,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=209d3455-18ff-48ea-984b-bbf6da6f61ae name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:06:22 ha-558946 crio[3620]: time="2024-09-10 18:06:22.989675152Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c992e029-281b-4813-a8ca-e4c3012d9bee name=/runtime.v1.RuntimeService/Version
	Sep 10 18:06:22 ha-558946 crio[3620]: time="2024-09-10 18:06:22.989747211Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c992e029-281b-4813-a8ca-e4c3012d9bee name=/runtime.v1.RuntimeService/Version
	Sep 10 18:06:22 ha-558946 crio[3620]: time="2024-09-10 18:06:22.991246175Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=40c91874-b07f-4060-aff6-3d1e036c163a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:06:22 ha-558946 crio[3620]: time="2024-09-10 18:06:22.991695054Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991582991668768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40c91874-b07f-4060-aff6-3d1e036c163a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:06:22 ha-558946 crio[3620]: time="2024-09-10 18:06:22.992321613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d351f33c-c1c0-4978-bef4-9cded86f5adb name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:06:22 ha-558946 crio[3620]: time="2024-09-10 18:06:22.992375318Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d351f33c-c1c0-4978-bef4-9cded86f5adb name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:06:22 ha-558946 crio[3620]: time="2024-09-10 18:06:22.992785556Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6b2ffc5626f3775e9fb373cfb1b3350651ba8735b9190ab8dda1f3dbe9f1a30,PodSandboxId:682e75d7e519a2fd2267972de16d8ecf3066fe3c6475cdc7e98ae21d082739ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725991313294806034,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d16d6af2ae8be09fc14df4895ec76820b67e61632967bcac0af26aeadbcd339,PodSandboxId:d32b89c1b7c331b36461b8b13bbd526e4bc6241c7a258f24b5c9d649c6898aa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725991299321524381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2173425b282f1680eb621b99d603c566252599facbc35fc1d2fb8df76fc2b318,PodSandboxId:62cb63018e3103f7fc5e00bd4ca904e9524f8a7a92c7c3a33c1587a3ef278ea6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725991299307616134,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc93ca6fe5c1facf9e80ebe158beabecc3d1a8c32be0c454e3c89a52fa894422,PodSandboxId:6b30967ed8560b2e42d3fbb805e0e4872e4cefd3ac86bfe9d100580327591709,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725991291620475922,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10efff272db119d6c6a88edf63354fef6d528a7cadb208799e802d1e8affa0b7,PodSandboxId:34b147bddaf58b77c20fb3fdd19c44187b9cbd620c70e7fa7178429411bf12ae,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725991273167566246,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1dc86399c32e0e26e2d6ddcbf3bc74,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b6f7dc0df387bab7c1cc41a014568a8ec1ab51dab00c335a986e02d4529b71,PodSandboxId:a619eb4fb7dfbdd628ddf6c50657882a796c6ff57fdb8afae83432d53d414f6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725991258474946983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:7eeb82852ac4491fb60767bb4952b2f14f7290f33ee4288e83e54b4a39a88bf6,PodSandboxId:682e75d7e519a2fd2267972de16d8ecf3066fe3c6475cdc7e98ae21d082739ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725991258360296243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:b47c7cf7abfabb2b718302e2a0648d2751a45f43ae8380be7d32507b93bf3b1b,PodSandboxId:4e63c4c6bb84c930ce7667a861a21547562e48664bc266dca4e266f01c7d39cc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725991258478868951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2585fff
0689dfb9f34e31b9308bf3929d6411bf92c00b138057cd97140b201,PodSandboxId:ad86a56a2ec675678c65c8c1f31d87acfcaf5bb0213a5163f0c39e813506ef17,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725991258314903492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d1780eab1fe91d63193dad7c68cc1
e73d7dca8ee77fb82736e04e7d94764d9a,PodSandboxId:8a25054429ab3fcfe2f73f86024c9d64933a9fd4c9b3569ce27fd03063451bc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725991258191376223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46aa5a70ba5d09b21daa9afbfae7b536a67c11626d39d23e60b648bf231eb32b,PodSandboxId:d32b89c1b7c331b36461b8b13bbd526e4bc6241c7a258f24b5c9d649c6898aa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725991258127622656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf78d03f37b8f3a834445581ec46afb0a6d4024f4c9ccbd4773959726e096b6d,PodSandboxId:0ba1ce38894b1c79a7f8588fb5f29b8949fe2a4d8ef3540d3da58ab2e4701a14,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725991257947162806,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14554600b638e1a8c641d49ad069d21f787ee2ca8aa0be78b55e93f84c6895b4,PodSandboxId:62cb63018e3103f7fc5e00bd4ca904e9524f8a7a92c7c3a33c1587a3ef278ea6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725991257951224240,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186e126d69c5bc43c518cab04c696af90a9916197b40f61274a79ccb2b9cffcc,PodSandboxId:0d47a98eb3112a7fef1b7583c513f3ae9adff0eadaa82f636d7862596d2eac7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725991252749804505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f35f5f9c0297fd9ab025fe3115dc06cad0defe8c7d8c46b7d3aeebb921e8d37,PodSandboxId:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725990770310858417,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557,PodSandboxId:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725990640053719525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8,PodSandboxId:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725990639993359914,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d,PodSandboxId:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725990628186559343,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c,PodSandboxId:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725990625854761467,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc,PodSandboxId:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725990614322273529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa,PodSandboxId:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725990614273365582,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d351f33c-c1c0-4978-bef4-9cded86f5adb name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:06:23 ha-558946 crio[3620]: time="2024-09-10 18:06:23.049755391Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b11c5f6-0b0f-4e4b-80ef-6bb2e9a68ca9 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:06:23 ha-558946 crio[3620]: time="2024-09-10 18:06:23.049844949Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b11c5f6-0b0f-4e4b-80ef-6bb2e9a68ca9 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:06:23 ha-558946 crio[3620]: time="2024-09-10 18:06:23.051216581Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a71b3d8-506a-4c62-9eb9-850989e038d2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:06:23 ha-558946 crio[3620]: time="2024-09-10 18:06:23.051658586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991583051636598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a71b3d8-506a-4c62-9eb9-850989e038d2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:06:23 ha-558946 crio[3620]: time="2024-09-10 18:06:23.052182427Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9fac3f94-bc09-405b-8e10-53ff1f27abb5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:06:23 ha-558946 crio[3620]: time="2024-09-10 18:06:23.052279436Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fac3f94-bc09-405b-8e10-53ff1f27abb5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:06:23 ha-558946 crio[3620]: time="2024-09-10 18:06:23.052668064Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6b2ffc5626f3775e9fb373cfb1b3350651ba8735b9190ab8dda1f3dbe9f1a30,PodSandboxId:682e75d7e519a2fd2267972de16d8ecf3066fe3c6475cdc7e98ae21d082739ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725991313294806034,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d16d6af2ae8be09fc14df4895ec76820b67e61632967bcac0af26aeadbcd339,PodSandboxId:d32b89c1b7c331b36461b8b13bbd526e4bc6241c7a258f24b5c9d649c6898aa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725991299321524381,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2173425b282f1680eb621b99d603c566252599facbc35fc1d2fb8df76fc2b318,PodSandboxId:62cb63018e3103f7fc5e00bd4ca904e9524f8a7a92c7c3a33c1587a3ef278ea6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725991299307616134,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc93ca6fe5c1facf9e80ebe158beabecc3d1a8c32be0c454e3c89a52fa894422,PodSandboxId:6b30967ed8560b2e42d3fbb805e0e4872e4cefd3ac86bfe9d100580327591709,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725991291620475922,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10efff272db119d6c6a88edf63354fef6d528a7cadb208799e802d1e8affa0b7,PodSandboxId:34b147bddaf58b77c20fb3fdd19c44187b9cbd620c70e7fa7178429411bf12ae,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725991273167566246,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1dc86399c32e0e26e2d6ddcbf3bc74,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b6f7dc0df387bab7c1cc41a014568a8ec1ab51dab00c335a986e02d4529b71,PodSandboxId:a619eb4fb7dfbdd628ddf6c50657882a796c6ff57fdb8afae83432d53d414f6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725991258474946983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:7eeb82852ac4491fb60767bb4952b2f14f7290f33ee4288e83e54b4a39a88bf6,PodSandboxId:682e75d7e519a2fd2267972de16d8ecf3066fe3c6475cdc7e98ae21d082739ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725991258360296243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baf5cd7e-5266-4d55-bd6c-459257baa463,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:b47c7cf7abfabb2b718302e2a0648d2751a45f43ae8380be7d32507b93bf3b1b,PodSandboxId:4e63c4c6bb84c930ce7667a861a21547562e48664bc266dca4e266f01c7d39cc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725991258478868951,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2585fff
0689dfb9f34e31b9308bf3929d6411bf92c00b138057cd97140b201,PodSandboxId:ad86a56a2ec675678c65c8c1f31d87acfcaf5bb0213a5163f0c39e813506ef17,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725991258314903492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d1780eab1fe91d63193dad7c68cc1
e73d7dca8ee77fb82736e04e7d94764d9a,PodSandboxId:8a25054429ab3fcfe2f73f86024c9d64933a9fd4c9b3569ce27fd03063451bc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725991258191376223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46aa5a70ba5d09b21daa9afbfae7b536a67c11626d39d23e60b648bf231eb32b,PodSandboxId:d32b89c1b7c331b36461b8b13bbd526e4bc6241c7a258f24b5c9d649c6898aa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725991258127622656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4cb243a9afd92bb7fd74751dcfef866,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf78d03f37b8f3a834445581ec46afb0a6d4024f4c9ccbd4773959726e096b6d,PodSandboxId:0ba1ce38894b1c79a7f8588fb5f29b8949fe2a4d8ef3540d3da58ab2e4701a14,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725991257947162806,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14554600b638e1a8c641d49ad069d21f787ee2ca8aa0be78b55e93f84c6895b4,PodSandboxId:62cb63018e3103f7fc5e00bd4ca904e9524f8a7a92c7c3a33c1587a3ef278ea6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725991257951224240,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3bcac99226bc257a0bbe4358f2cf25,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186e126d69c5bc43c518cab04c696af90a9916197b40f61274a79ccb2b9cffcc,PodSandboxId:0d47a98eb3112a7fef1b7583c513f3ae9adff0eadaa82f636d7862596d2eac7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725991252749804505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f35f5f9c0297fd9ab025fe3115dc06cad0defe8c7d8c46b7d3aeebb921e8d37,PodSandboxId:4704ca681891e2dca21eeae414c175cbab455ce080e1b9da3ef12f8ba150e893,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725990770310858417,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2t4ms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7344679f-13fd-466b-ad26-a77a20b9386a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557,PodSandboxId:1c4e9776e0278d5f7db72cad1da088fb23737a0991a99f30577e35caa155e308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725990640053719525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pv7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e75ceddc-7576-45f6-8b80-2071bc7fbef8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8,PodSandboxId:434931d96929c88956bdf48a5ab808296d44f49907e090effad4cf325feaad41,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725990639993359914,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmcmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d79d296-3ee7-4b7b-8869-e45465da70ff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d,PodSandboxId:70857c92d854fe594707255272ce18015a71fc2350013cd89aa7cef7bbbc4968,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725990628186559343,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n8n67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019cf933-bf89-485d-a837-bf8bbedbc0df,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c,PodSandboxId:718077b7bfae64a887fbe3691ab5d6fa9f961c19dfe3911ee40143bbe7d3a7ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725990625854761467,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjqzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35a3fe57-a2d6-4134-8205-ce5c8d09b707,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc,PodSandboxId:8c5d88f2921ad807ef90b2185330bf5b12d9f62ef57825a94a7bc1437481f2a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725990614322273529,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbd273a78c889b66df701581a530b4b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa,PodSandboxId:ca3c0af433ced8495f1ff89d8bee185a88e944ca35b1b17b7d2d6c0dea1ae00a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725990614273365582,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-558946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066fe90d6e5504c167c416bab3c626a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fac3f94-bc09-405b-8e10-53ff1f27abb5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d6b2ffc5626f3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   682e75d7e519a       storage-provisioner
	4d16d6af2ae8b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   2                   d32b89c1b7c33       kube-controller-manager-ha-558946
	2173425b282f1       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            3                   62cb63018e310       kube-apiserver-ha-558946
	fc93ca6fe5c1f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   6b30967ed8560       busybox-7dff88458-2t4ms
	10efff272db11       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   34b147bddaf58       kube-vip-ha-558946
	b47c7cf7abfab       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   4e63c4c6bb84c       kindnet-n8n67
	b8b6f7dc0df38       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      5 minutes ago       Running             kube-proxy                1                   a619eb4fb7dfb       kube-proxy-gjqzx
	7eeb82852ac44       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   682e75d7e519a       storage-provisioner
	fd2585fff0689       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      5 minutes ago       Running             kube-scheduler            1                   ad86a56a2ec67       kube-scheduler-ha-558946
	7d1780eab1fe9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   8a25054429ab3       coredns-6f6b679f8f-5pv7s
	46aa5a70ba5d0       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      5 minutes ago       Exited              kube-controller-manager   1                   d32b89c1b7c33       kube-controller-manager-ha-558946
	14554600b638e       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      5 minutes ago       Exited              kube-apiserver            2                   62cb63018e310       kube-apiserver-ha-558946
	bf78d03f37b8f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   0ba1ce38894b1       etcd-ha-558946
	186e126d69c5b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   0d47a98eb3112       coredns-6f6b679f8f-fmcmc
	7f35f5f9c0297       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   4704ca681891e       busybox-7dff88458-2t4ms
	142a15832796a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   1c4e9776e0278       coredns-6f6b679f8f-5pv7s
	6899c9efcedba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   434931d96929c       coredns-6f6b679f8f-fmcmc
	e119a0b88cc46       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    15 minutes ago      Exited              kindnet-cni               0                   70857c92d854f       kindnet-n8n67
	1668374a3d17c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      15 minutes ago      Exited              kube-proxy                0                   718077b7bfae6       kube-proxy-gjqzx
	edfccb881d415       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      16 minutes ago      Exited              kube-scheduler            0                   8c5d88f2921ad       kube-scheduler-ha-558946
	5ebc6afb00309       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   ca3c0af433ced       etcd-ha-558946
	
	
	==> coredns [142a15832796a98511a32944c8be4a27dbd1f6fd17dd8cbbc7dd310a8241d557] <==
	[INFO] 10.244.2.2:55393 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185738s
	[INFO] 10.244.2.2:37830 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000216713s
	[INFO] 10.244.2.2:45453 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139889s
	[INFO] 10.244.1.2:46063 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168728s
	[INFO] 10.244.1.2:59108 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116561s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1850&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1849&timeout=8m35s&timeoutSeconds=515&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1850&timeout=5m6s&timeoutSeconds=306&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1093767674]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 17:59:05.683) (total time: 11867ms):
	Trace[1093767674]: ---"Objects listed" error:Unauthorized 11867ms (17:59:17.550)
	Trace[1093767674]: [11.867351345s] [11.867351345s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[82884957]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 17:59:05.207) (total time: 12343ms):
	Trace[82884957]: ---"Objects listed" error:Unauthorized 12343ms (17:59:17.551)
	Trace[82884957]: [12.343941336s] [12.343941336s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1261403297]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 17:59:05.085) (total time: 12465ms):
	Trace[1261403297]: ---"Objects listed" error:Unauthorized 12465ms (17:59:17.551)
	Trace[1261403297]: [12.465997099s] [12.465997099s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [186e126d69c5bc43c518cab04c696af90a9916197b40f61274a79ccb2b9cffcc] <==
	[INFO] plugin/kubernetes: Trace[2113675064]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:00:59.236) (total time: 10002ms):
	Trace[2113675064]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (18:01:09.238)
	Trace[2113675064]: [10.002225664s] [10.002225664s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[50871242]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:01:08.235) (total time: 10001ms):
	Trace[50871242]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:01:18.236)
	Trace[50871242]: [10.001973146s] [10.001973146s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41410->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1060531173]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:01:11.790) (total time: 11819ms):
	Trace[1060531173]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41410->10.96.0.1:443: read: connection reset by peer 11818ms (18:01:23.608)
	Trace[1060531173]: [11.819130281s] [11.819130281s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41410->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [6899c9efcedba632aba4ed362f6fed10345cdb75a4bfe243c753875f33f964f8] <==
	[INFO] 10.244.0.4:34074 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013369s
	[INFO] 10.244.0.4:34879 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107543s
	[INFO] 10.244.2.2:60365 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000255288s
	[INFO] 10.244.1.2:49914 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000123225s
	[INFO] 10.244.1.2:59420 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000122155s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1850&timeout=5m10s&timeoutSeconds=310&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1826&timeout=8m56s&timeoutSeconds=536&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1822&timeout=6m45s&timeoutSeconds=405&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[496628189]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 17:59:05.450) (total time: 12099ms):
	Trace[496628189]: ---"Objects listed" error:Unauthorized 12099ms (17:59:17.550)
	Trace[496628189]: [12.099830739s] [12.099830739s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1525756423]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 17:59:05.525) (total time: 12025ms):
	Trace[1525756423]: ---"Objects listed" error:Unauthorized 12025ms (17:59:17.550)
	Trace[1525756423]: [12.025488262s] [12.025488262s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1080527706]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 17:59:05.149) (total time: 12402ms):
	Trace[1080527706]: ---"Objects listed" error:Unauthorized 12402ms (17:59:17.551)
	Trace[1080527706]: [12.402937793s] [12.402937793s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7d1780eab1fe91d63193dad7c68cc1e73d7dca8ee77fb82736e04e7d94764d9a] <==
	Trace[1492769942]: [13.832963258s] [13.832963258s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:60524->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:60520->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1467704825]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:01:09.764) (total time: 13844ms):
	Trace[1467704825]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:60520->10.96.0.1:443: read: connection reset by peer 13844ms (18:01:23.609)
	Trace[1467704825]: [13.844307192s] [13.844307192s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:60520->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-558946
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-558946
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-558946
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T17_50_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:50:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-558946
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:06:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:01:47 +0000   Tue, 10 Sep 2024 17:50:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:01:47 +0000   Tue, 10 Sep 2024 17:50:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:01:47 +0000   Tue, 10 Sep 2024 17:50:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:01:47 +0000   Tue, 10 Sep 2024 17:50:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    ha-558946
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6888e6da1bdd45dda1c087615a5c1996
	  System UUID:                6888e6da-1bdd-45dd-a1c0-87615a5c1996
	  Boot ID:                    a2579398-c9ae-48e0-a407-b08542361a94
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2t4ms              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-6f6b679f8f-5pv7s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-6f6b679f8f-fmcmc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-558946                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-n8n67                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-558946             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-558946    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-gjqzx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-558946             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-558946                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m44s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-558946 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)      kubelet          Node ha-558946 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-558946 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-558946 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-558946 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-558946 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           15m                    node-controller  Node ha-558946 event: Registered Node ha-558946 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-558946 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-558946 event: Registered Node ha-558946 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-558946 event: Registered Node ha-558946 in Controller
	  Warning  ContainerGCFailed        6m3s (x2 over 7m3s)    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m49s (x2 over 6m14s)  kubelet          Node ha-558946 status is now: NodeNotReady
	  Normal   RegisteredNode           4m48s                  node-controller  Node ha-558946 event: Registered Node ha-558946 in Controller
	  Normal   RegisteredNode           4m39s                  node-controller  Node ha-558946 event: Registered Node ha-558946 in Controller
	  Normal   RegisteredNode           3m17s                  node-controller  Node ha-558946 event: Registered Node ha-558946 in Controller
	
	
	Name:               ha-558946-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-558946-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-558946
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T17_51_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:51:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-558946-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:06:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:02:22 +0000   Tue, 10 Sep 2024 18:01:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:02:22 +0000   Tue, 10 Sep 2024 18:01:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:02:22 +0000   Tue, 10 Sep 2024 18:01:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:02:22 +0000   Tue, 10 Sep 2024 18:01:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    ha-558946-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 db1a36bf29714274bd4e3db4349b13e5
	  System UUID:                db1a36bf-2971-4274-bd4e-3db4349b13e5
	  Boot ID:                    b212953d-76e1-4d89-8b39-baac7eb29a58
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xnl8m                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-558946-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-sfr7m                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-558946-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-558946-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-xggtm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-558946-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-558946-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m39s                kube-proxy       
	  Normal  Starting                 15m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)    kubelet          Node ha-558946-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)    kubelet          Node ha-558946-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)    kubelet          Node ha-558946-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                  node-controller  Node ha-558946-m02 event: Registered Node ha-558946-m02 in Controller
	  Normal  RegisteredNode           15m                  node-controller  Node ha-558946-m02 event: Registered Node ha-558946-m02 in Controller
	  Normal  RegisteredNode           13m                  node-controller  Node ha-558946-m02 event: Registered Node ha-558946-m02 in Controller
	  Normal  NodeNotReady             11m                  node-controller  Node ha-558946-m02 status is now: NodeNotReady
	  Normal  Starting                 5m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m9s (x8 over 5m9s)  kubelet          Node ha-558946-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m9s (x8 over 5m9s)  kubelet          Node ha-558946-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m9s (x7 over 5m9s)  kubelet          Node ha-558946-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m48s                node-controller  Node ha-558946-m02 event: Registered Node ha-558946-m02 in Controller
	  Normal  RegisteredNode           4m39s                node-controller  Node ha-558946-m02 event: Registered Node ha-558946-m02 in Controller
	  Normal  RegisteredNode           3m17s                node-controller  Node ha-558946-m02 event: Registered Node ha-558946-m02 in Controller
	
	
	Name:               ha-558946-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-558946-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=ha-558946
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T17_53_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:53:20 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-558946-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:03:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 10 Sep 2024 18:03:36 +0000   Tue, 10 Sep 2024 18:04:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 10 Sep 2024 18:03:36 +0000   Tue, 10 Sep 2024 18:04:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 10 Sep 2024 18:03:36 +0000   Tue, 10 Sep 2024 18:04:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 10 Sep 2024 18:03:36 +0000   Tue, 10 Sep 2024 18:04:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-558946-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aded0f54a0334cb59bab04e35bcf99b0
	  System UUID:                aded0f54-a033-4cb5-9bab-04e35bcf99b0
	  Boot ID:                    7cb07829-d4bc-4530-a664-dcc19ff07df6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ll82q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-7kzcw              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-mk6xt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-558946-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-558946-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-558946-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-558946-m04 event: Registered Node ha-558946-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-558946-m04 event: Registered Node ha-558946-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-558946-m04 event: Registered Node ha-558946-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-558946-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m48s                  node-controller  Node ha-558946-m04 event: Registered Node ha-558946-m04 in Controller
	  Normal   RegisteredNode           4m39s                  node-controller  Node ha-558946-m04 event: Registered Node ha-558946-m04 in Controller
	  Normal   NodeNotReady             4m8s                   node-controller  Node ha-558946-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m17s                  node-controller  Node ha-558946-m04 event: Registered Node ha-558946-m04 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m47s (x3 over 2m47s)  kubelet          Node ha-558946-m04 has been rebooted, boot id: 7cb07829-d4bc-4530-a664-dcc19ff07df6
	  Normal   NodeHasSufficientMemory  2m47s (x4 over 2m47s)  kubelet          Node ha-558946-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x4 over 2m47s)  kubelet          Node ha-558946-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x4 over 2m47s)  kubelet          Node ha-558946-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m47s                  kubelet          Node ha-558946-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m47s (x2 over 2m47s)  kubelet          Node ha-558946-m04 status is now: NodeReady
	  Normal   NodeNotReady             104s                   node-controller  Node ha-558946-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep10 17:50] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.058035] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055902] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.190997] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.121180] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.267314] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.918739] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +4.478653] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.062428] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.320707] systemd-fstab-generator[1311]: Ignoring "noauto" option for root device
	[  +0.078655] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.553971] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.155608] kauditd_printk_skb: 38 callbacks suppressed
	[Sep10 17:51] kauditd_printk_skb: 24 callbacks suppressed
	[Sep10 17:57] kauditd_printk_skb: 1 callbacks suppressed
	[Sep10 18:00] systemd-fstab-generator[3545]: Ignoring "noauto" option for root device
	[  +0.160317] systemd-fstab-generator[3557]: Ignoring "noauto" option for root device
	[  +0.176138] systemd-fstab-generator[3571]: Ignoring "noauto" option for root device
	[  +0.150443] systemd-fstab-generator[3583]: Ignoring "noauto" option for root device
	[  +0.300541] systemd-fstab-generator[3611]: Ignoring "noauto" option for root device
	[  +0.740504] systemd-fstab-generator[3706]: Ignoring "noauto" option for root device
	[  +5.938933] kauditd_printk_skb: 132 callbacks suppressed
	[Sep10 18:01] kauditd_printk_skb: 75 callbacks suppressed
	[ +50.631857] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [5ebc6afb00309d6d66dc9f2083311e126ee50fb9b817e7e4ade02598d80c24aa] <==
	2024/09/10 17:59:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/10 17:59:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/10 17:59:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/10 17:59:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/10 17:59:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-10T17:59:18.959485Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.109:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-10T17:59:18.959529Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.109:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-10T17:59:18.960911Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"22872ffef731375a","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-10T17:59:18.961038Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7f0112a792d03c41"}
	{"level":"info","ts":"2024-09-10T17:59:18.961232Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7f0112a792d03c41"}
	{"level":"info","ts":"2024-09-10T17:59:18.961299Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7f0112a792d03c41"}
	{"level":"info","ts":"2024-09-10T17:59:18.961470Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41"}
	{"level":"info","ts":"2024-09-10T17:59:18.961528Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41"}
	{"level":"info","ts":"2024-09-10T17:59:18.961579Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"22872ffef731375a","remote-peer-id":"7f0112a792d03c41"}
	{"level":"info","ts":"2024-09-10T17:59:18.961604Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7f0112a792d03c41"}
	{"level":"info","ts":"2024-09-10T17:59:18.961612Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T17:59:18.961620Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T17:59:18.961652Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T17:59:18.961719Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"22872ffef731375a","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T17:59:18.961760Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"22872ffef731375a","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T17:59:18.961803Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"22872ffef731375a","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T17:59:18.961830Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T17:59:18.964739Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.109:2380"}
	{"level":"info","ts":"2024-09-10T17:59:18.964826Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.109:2380"}
	{"level":"info","ts":"2024-09-10T17:59:18.964835Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-558946","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.109:2380"],"advertise-client-urls":["https://192.168.39.109:2379"]}
	
	
	==> etcd [bf78d03f37b8f3a834445581ec46afb0a6d4024f4c9ccbd4773959726e096b6d] <==
	{"level":"warn","ts":"2024-09-10T18:02:59.431185Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.602311ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-10T18:02:59.431440Z","caller":"traceutil/trace.go:171","msg":"trace[1409820815] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:0; response_revision:2417; }","duration":"112.985421ms","start":"2024-09-10T18:02:59.318432Z","end":"2024-09-10T18:02:59.431418Z","steps":["trace[1409820815] 'count revisions from in-memory index tree'  (duration: 99.726321ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T18:03:39.635215Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"d8fe3a58642295be","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"32.721881ms"}
	{"level":"warn","ts":"2024-09-10T18:03:39.635309Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"7f0112a792d03c41","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"32.821809ms"}
	{"level":"info","ts":"2024-09-10T18:03:39.636906Z","caller":"traceutil/trace.go:171","msg":"trace[1720712081] linearizableReadLoop","detail":"{readStateIndex:2988; appliedIndex:2988; }","duration":"110.877238ms","start":"2024-09-10T18:03:39.526002Z","end":"2024-09-10T18:03:39.636880Z","steps":["trace[1720712081] 'read index received'  (duration: 110.871919ms)","trace[1720712081] 'applied index is now lower than readState.Index'  (duration: 3.881µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-10T18:03:39.637276Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.248194ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-8ldlx\" ","response":"range_response_count:1 size:4870"}
	{"level":"info","ts":"2024-09-10T18:03:39.637386Z","caller":"traceutil/trace.go:171","msg":"trace[870820715] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-8ldlx; range_end:; response_count:1; response_revision:2577; }","duration":"111.375877ms","start":"2024-09-10T18:03:39.525998Z","end":"2024-09-10T18:03:39.637374Z","steps":["trace[870820715] 'agreement among raft nodes before linearized reading'  (duration: 111.148455ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T18:03:49.712431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"22872ffef731375a switched to configuration voters=(2488010091260884826 9151616428725517377)"}
	{"level":"info","ts":"2024-09-10T18:03:49.714771Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"70942a38564785b0","local-member-id":"22872ffef731375a","removed-remote-peer-id":"d8fe3a58642295be","removed-remote-peer-urls":["https://192.168.39.241:2380"]}
	{"level":"info","ts":"2024-09-10T18:03:49.714889Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d8fe3a58642295be"}
	{"level":"warn","ts":"2024-09-10T18:03:49.715253Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T18:03:49.715323Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d8fe3a58642295be"}
	{"level":"warn","ts":"2024-09-10T18:03:49.715643Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T18:03:49.715719Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T18:03:49.715989Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"22872ffef731375a","remote-peer-id":"d8fe3a58642295be"}
	{"level":"warn","ts":"2024-09-10T18:03:49.716462Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"22872ffef731375a","remote-peer-id":"d8fe3a58642295be","error":"context canceled"}
	{"level":"warn","ts":"2024-09-10T18:03:49.716539Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"d8fe3a58642295be","error":"failed to read d8fe3a58642295be on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-10T18:03:49.716596Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"22872ffef731375a","remote-peer-id":"d8fe3a58642295be"}
	{"level":"warn","ts":"2024-09-10T18:03:49.716898Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"22872ffef731375a","remote-peer-id":"d8fe3a58642295be","error":"context canceled"}
	{"level":"info","ts":"2024-09-10T18:03:49.717176Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"22872ffef731375a","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T18:03:49.717282Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T18:03:49.717330Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"22872ffef731375a","removed-remote-peer-id":"d8fe3a58642295be"}
	{"level":"info","ts":"2024-09-10T18:03:49.717587Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"22872ffef731375a","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"d8fe3a58642295be"}
	{"level":"warn","ts":"2024-09-10T18:03:49.734149Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"22872ffef731375a","remote-peer-id-stream-handler":"22872ffef731375a","remote-peer-id-from":"d8fe3a58642295be"}
	{"level":"warn","ts":"2024-09-10T18:03:49.734849Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.241:50498","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:06:23 up 16 min,  0 users,  load average: 0.32, 0.43, 0.34
	Linux ha-558946 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b47c7cf7abfabb2b718302e2a0648d2751a45f43ae8380be7d32507b93bf3b1b] <==
	I0910 18:05:39.555588       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 18:05:49.552235       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 18:05:49.552271       1 main.go:299] handling current node
	I0910 18:05:49.552285       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 18:05:49.552289       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 18:05:49.552463       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 18:05:49.552490       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 18:05:59.549772       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 18:05:59.549894       1 main.go:299] handling current node
	I0910 18:05:59.549923       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 18:05:59.549943       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 18:05:59.550152       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 18:05:59.550184       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 18:06:09.553927       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 18:06:09.554114       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 18:06:09.554287       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 18:06:09.554336       1 main.go:299] handling current node
	I0910 18:06:09.554360       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 18:06:09.554377       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 18:06:19.553154       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 18:06:19.553303       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 18:06:19.553566       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 18:06:19.553607       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 18:06:19.553670       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 18:06:19.553677       1 main.go:299] handling current node
	
	
	==> kindnet [e119a0b88cc4676d588557cd49f607b32995ee9bfe4d82518a7e362956ac952d] <==
	I0910 17:58:39.330420       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 17:58:49.337684       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 17:58:49.337777       1 main.go:299] handling current node
	I0910 17:58:49.337804       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 17:58:49.337821       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 17:58:49.337959       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0910 17:58:49.337992       1 main.go:322] Node ha-558946-m03 has CIDR [10.244.2.0/24] 
	I0910 17:58:49.338133       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 17:58:49.338163       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 17:58:59.331635       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 17:58:59.331749       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 17:58:59.331930       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0910 17:58:59.331954       1 main.go:322] Node ha-558946-m03 has CIDR [10.244.2.0/24] 
	I0910 17:58:59.332013       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 17:58:59.332030       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 17:58:59.332238       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 17:58:59.332275       1 main.go:299] handling current node
	I0910 17:59:09.338839       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0910 17:59:09.338937       1 main.go:322] Node ha-558946-m04 has CIDR [10.244.3.0/24] 
	I0910 17:59:09.339183       1 main.go:295] Handling node with IPs: map[192.168.39.109:{}]
	I0910 17:59:09.339227       1 main.go:299] handling current node
	I0910 17:59:09.339255       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0910 17:59:09.339272       1 main.go:322] Node ha-558946-m02 has CIDR [10.244.1.0/24] 
	I0910 17:59:09.339350       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0910 17:59:09.339371       1 main.go:322] Node ha-558946-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [14554600b638e1a8c641d49ad069d21f787ee2ca8aa0be78b55e93f84c6895b4] <==
	I0910 18:00:58.526798       1 options.go:228] external host was not specified, using 192.168.39.109
	I0910 18:00:58.548464       1 server.go:142] Version: v1.31.0
	I0910 18:00:58.548522       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:00:59.721187       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0910 18:00:59.738174       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0910 18:00:59.744011       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0910 18:00:59.746152       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0910 18:00:59.746511       1 instance.go:232] Using reconciler: lease
	W0910 18:01:19.718576       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0910 18:01:19.718576       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0910 18:01:19.748416       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0910 18:01:19.748419       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [2173425b282f1680eb621b99d603c566252599facbc35fc1d2fb8df76fc2b318] <==
	I0910 18:01:41.476156       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0910 18:01:41.545799       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0910 18:01:41.545886       1 policy_source.go:224] refreshing policies
	I0910 18:01:41.564481       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0910 18:01:41.564574       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0910 18:01:41.564651       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0910 18:01:41.564910       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0910 18:01:41.565045       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0910 18:01:41.565297       1 shared_informer.go:320] Caches are synced for configmaps
	I0910 18:01:41.567213       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0910 18:01:41.577313       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0910 18:01:41.577365       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0910 18:01:41.577375       1 aggregator.go:171] initial CRD sync complete...
	I0910 18:01:41.577390       1 autoregister_controller.go:144] Starting autoregister controller
	I0910 18:01:41.577395       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0910 18:01:41.577399       1 cache.go:39] Caches are synced for autoregister controller
	I0910 18:01:41.578179       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0910 18:01:41.579584       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.241 192.168.39.96]
	I0910 18:01:41.582346       1 controller.go:615] quota admission added evaluator for: endpoints
	I0910 18:01:41.595012       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0910 18:01:41.598476       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0910 18:01:41.642265       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0910 18:01:42.475444       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0910 18:01:43.016195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.109 192.168.39.96]
	W0910 18:04:03.020261       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.109 192.168.39.96]
	
	
	==> kube-controller-manager [46aa5a70ba5d09b21daa9afbfae7b536a67c11626d39d23e60b648bf231eb32b] <==
	I0910 18:00:59.225619       1 serving.go:386] Generated self-signed cert in-memory
	I0910 18:00:59.847672       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0910 18:00:59.847707       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:00:59.849417       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0910 18:00:59.849598       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0910 18:00:59.849815       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0910 18:00:59.850183       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0910 18:01:20.753987       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.109:8443/healthz\": dial tcp 192.168.39.109:8443: connect: connection refused"
	
	
	==> kube-controller-manager [4d16d6af2ae8be09fc14df4895ec76820b67e61632967bcac0af26aeadbcd339] <==
	I0910 18:04:39.939331       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 18:04:39.964901       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 18:04:40.013608       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.04019ms"
	I0910 18:04:40.013871       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="100.095µs"
	I0910 18:04:41.161621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	E0910 18:04:44.884023       1 gc_controller.go:151] "Failed to get node" err="node \"ha-558946-m03\" not found" logger="pod-garbage-collector-controller" node="ha-558946-m03"
	E0910 18:04:44.884308       1 gc_controller.go:151] "Failed to get node" err="node \"ha-558946-m03\" not found" logger="pod-garbage-collector-controller" node="ha-558946-m03"
	E0910 18:04:44.884378       1 gc_controller.go:151] "Failed to get node" err="node \"ha-558946-m03\" not found" logger="pod-garbage-collector-controller" node="ha-558946-m03"
	E0910 18:04:44.884405       1 gc_controller.go:151] "Failed to get node" err="node \"ha-558946-m03\" not found" logger="pod-garbage-collector-controller" node="ha-558946-m03"
	E0910 18:04:44.884495       1 gc_controller.go:151] "Failed to get node" err="node \"ha-558946-m03\" not found" logger="pod-garbage-collector-controller" node="ha-558946-m03"
	I0910 18:04:44.896813       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-558946-m03"
	I0910 18:04:44.926524       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-558946-m03"
	I0910 18:04:44.926614       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-558946-m03"
	I0910 18:04:44.957732       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-558946-m03"
	I0910 18:04:44.957809       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-mshf2"
	I0910 18:04:44.984841       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-mshf2"
	I0910 18:04:44.984881       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-558946-m03"
	I0910 18:04:45.019544       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-558946-m03"
	I0910 18:04:45.019638       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-558946-m03"
	I0910 18:04:45.051302       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-558946-m03"
	I0910 18:04:45.051349       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-8ldlx"
	I0910 18:04:45.092978       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-8ldlx"
	I0910 18:04:45.093148       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-558946-m03"
	I0910 18:04:45.103650       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-558946-m04"
	I0910 18:04:45.123423       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-558946-m03"
	
	
	==> kube-proxy [1668374a3d17c6c4af9669fe8c60235ad09a8a61dcd3b5990721b70e407e546c] <==
	E0910 17:58:14.874562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1814\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:17.946953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-558946&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:17.947032       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-558946&resourceVersion=1850\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:17.947174       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:17.947219       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1814\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:21.017113       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1754": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:21.017195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1754\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:27.160574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1754": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:27.160656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1754\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:27.160752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:27.160776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1814\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:27.160860       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-558946&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:27.160875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-558946&resourceVersion=1850\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:36.380413       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-558946&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:36.380539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-558946&resourceVersion=1850\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:39.448917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:39.449286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1814\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:39.449646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1754": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:39.449717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1754\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:58:54.809699       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-558946&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:58:54.809889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-558946&resourceVersion=1850\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:59:04.026187       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:59:04.026290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1814\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0910 17:59:07.097985       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1754": dial tcp 192.168.39.254:8443: connect: no route to host
	E0910 17:59:07.098108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1754\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [b8b6f7dc0df387bab7c1cc41a014568a8ec1ab51dab00c335a986e02d4529b71] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 18:01:00.761347       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-558946\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0910 18:01:03.832710       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-558946\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0910 18:01:06.904625       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-558946\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0910 18:01:13.048586       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-558946\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0910 18:01:22.265672       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-558946\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0910 18:01:38.755957       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.109"]
	E0910 18:01:38.756170       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 18:01:38.814039       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 18:01:38.814132       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 18:01:38.814198       1 server_linux.go:169] "Using iptables Proxier"
	I0910 18:01:38.817273       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 18:01:38.818657       1 server.go:483] "Version info" version="v1.31.0"
	I0910 18:01:38.818710       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:01:38.820947       1 config.go:104] "Starting endpoint slice config controller"
	I0910 18:01:38.821049       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 18:01:38.821598       1 config.go:197] "Starting service config controller"
	I0910 18:01:38.821621       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 18:01:38.821949       1 config.go:326] "Starting node config controller"
	I0910 18:01:38.821990       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 18:01:38.922373       1 shared_informer.go:320] Caches are synced for service config
	I0910 18:01:38.922374       1 shared_informer.go:320] Caches are synced for node config
	I0910 18:01:38.922414       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [edfccb881d415b9fa0959a1298d85b241e7d2914cbb4710e83aec1e27375d4dc] <==
	E0910 17:50:18.783701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0910 17:50:20.762780       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0910 17:53:20.783017       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7kzcw\": pod kindnet-7kzcw is already assigned to node \"ha-558946-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-7kzcw" node="ha-558946-m04"
	E0910 17:53:20.783217       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a925295e-bc22-4154-850e-79962508c7ac(kube-system/kindnet-7kzcw) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-7kzcw"
	E0910 17:53:20.783245       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7kzcw\": pod kindnet-7kzcw is already assigned to node \"ha-558946-m04\"" pod="kube-system/kindnet-7kzcw"
	I0910 17:53:20.783283       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-7kzcw" node="ha-558946-m04"
	E0910 17:53:20.926971       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9xbp8\": pod kindnet-9xbp8 is already assigned to node \"ha-558946-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-9xbp8" node="ha-558946-m04"
	E0910 17:53:20.927165       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d228e8b7-bd1d-442c-bf6a-2240d8c2ac04(kube-system/kindnet-9xbp8) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-9xbp8"
	E0910 17:53:20.927360       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9xbp8\": pod kindnet-9xbp8 is already assigned to node \"ha-558946-m04\"" pod="kube-system/kindnet-9xbp8"
	I0910 17:53:20.927386       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-9xbp8" node="ha-558946-m04"
	E0910 17:59:08.727812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0910 17:59:08.878739       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0910 17:59:09.755034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0910 17:59:11.234316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0910 17:59:11.530993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0910 17:59:11.724899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0910 17:59:11.942038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0910 17:59:12.292205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0910 17:59:15.354875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0910 17:59:16.082128       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0910 17:59:16.218485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0910 17:59:17.123378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0910 17:59:17.853287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0910 17:59:18.611800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0910 17:59:18.890951       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fd2585fff0689dfb9f34e31b9308bf3929d6411bf92c00b138057cd97140b201] <==
	W0910 18:01:36.280377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.109:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:36.280439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.109:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:36.438321       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.109:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:36.438440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.109:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:37.050281       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.109:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:37.050427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.109:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:37.250475       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.109:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:37.250603       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.109:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:37.464329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.109:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:37.464395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.109:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:37.698298       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.109:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:37.698356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.109:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:37.816544       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.109:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:37.816651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.109:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:38.278546       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.109:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:38.278604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.109:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:38.897899       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.109:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:38.898142       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.109:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:38.943024       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.109:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.109:8443: connect: connection refused
	E0910 18:01:38.943250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.109:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.109:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:01:41.487824       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0910 18:01:41.487921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 18:01:41.488023       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0910 18:01:41.488106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0910 18:01:51.382897       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 18:05:10 ha-558946 kubelet[1318]: E0910 18:05:10.560344    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991510559845314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:05:20 ha-558946 kubelet[1318]: E0910 18:05:20.302842    1318 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 18:05:20 ha-558946 kubelet[1318]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 18:05:20 ha-558946 kubelet[1318]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 18:05:20 ha-558946 kubelet[1318]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 18:05:20 ha-558946 kubelet[1318]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 18:05:20 ha-558946 kubelet[1318]: E0910 18:05:20.561677    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991520561317069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:05:20 ha-558946 kubelet[1318]: E0910 18:05:20.561870    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991520561317069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:05:30 ha-558946 kubelet[1318]: E0910 18:05:30.563453    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991530563166637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:05:30 ha-558946 kubelet[1318]: E0910 18:05:30.563884    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991530563166637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:05:40 ha-558946 kubelet[1318]: E0910 18:05:40.566416    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991540565810148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:05:40 ha-558946 kubelet[1318]: E0910 18:05:40.566465    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991540565810148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:05:50 ha-558946 kubelet[1318]: E0910 18:05:50.568341    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991550567500766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:05:50 ha-558946 kubelet[1318]: E0910 18:05:50.568733    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991550567500766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:06:00 ha-558946 kubelet[1318]: E0910 18:06:00.570905    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991560570593483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:06:00 ha-558946 kubelet[1318]: E0910 18:06:00.570943    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991560570593483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:06:10 ha-558946 kubelet[1318]: E0910 18:06:10.575237    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991570573609987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:06:10 ha-558946 kubelet[1318]: E0910 18:06:10.575591    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991570573609987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:06:20 ha-558946 kubelet[1318]: E0910 18:06:20.297843    1318 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 18:06:20 ha-558946 kubelet[1318]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 18:06:20 ha-558946 kubelet[1318]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 18:06:20 ha-558946 kubelet[1318]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 18:06:20 ha-558946 kubelet[1318]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 18:06:20 ha-558946 kubelet[1318]: E0910 18:06:20.576836    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991580576596200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:06:20 ha-558946 kubelet[1318]: E0910 18:06:20.576875    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725991580576596200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
** stderr ** 
	E0910 18:06:22.606855   33006 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19598-5973/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-558946 -n ha-558946
helpers_test.go:261: (dbg) Run:  kubectl --context ha-558946 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.72s)

TestMultiNode/serial/RestartKeepsNodes (327.49s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-925076
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-925076
E0910 18:21:35.174493   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-925076: exit status 82 (2m1.784344243s)

-- stdout --
	* Stopping node "multinode-925076-m03"  ...
	* Stopping node "multinode-925076-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-925076" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-925076 --wait=true -v=8 --alsologtostderr
E0910 18:23:56.538877   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-925076 --wait=true -v=8 --alsologtostderr: (3m23.469201954s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-925076
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-925076 -n multinode-925076
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-925076 logs -n 25: (1.506399114s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-925076 ssh -n                                                                 | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-925076 cp multinode-925076-m02:/home/docker/cp-test.txt                       | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2183249346/001/cp-test_multinode-925076-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n                                                                 | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-925076 cp multinode-925076-m02:/home/docker/cp-test.txt                       | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076:/home/docker/cp-test_multinode-925076-m02_multinode-925076.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n                                                                 | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n multinode-925076 sudo cat                                       | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | /home/docker/cp-test_multinode-925076-m02_multinode-925076.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-925076 cp multinode-925076-m02:/home/docker/cp-test.txt                       | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m03:/home/docker/cp-test_multinode-925076-m02_multinode-925076-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n                                                                 | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n multinode-925076-m03 sudo cat                                   | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | /home/docker/cp-test_multinode-925076-m02_multinode-925076-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-925076 cp testdata/cp-test.txt                                                | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n                                                                 | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-925076 cp multinode-925076-m03:/home/docker/cp-test.txt                       | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2183249346/001/cp-test_multinode-925076-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n                                                                 | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-925076 cp multinode-925076-m03:/home/docker/cp-test.txt                       | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076:/home/docker/cp-test_multinode-925076-m03_multinode-925076.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n                                                                 | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n multinode-925076 sudo cat                                       | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | /home/docker/cp-test_multinode-925076-m03_multinode-925076.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-925076 cp multinode-925076-m03:/home/docker/cp-test.txt                       | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m02:/home/docker/cp-test_multinode-925076-m03_multinode-925076-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n                                                                 | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n multinode-925076-m02 sudo cat                                   | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | /home/docker/cp-test_multinode-925076-m03_multinode-925076-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-925076 node stop m03                                                          | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	| node    | multinode-925076 node start                                                             | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-925076                                                                | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC |                     |
	| stop    | -p multinode-925076                                                                     | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC |                     |
	| start   | -p multinode-925076                                                                     | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:22 UTC | 10 Sep 24 18:26 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-925076                                                                | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:26 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:22:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:22:56.034651   42658 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:22:56.035085   42658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:22:56.035104   42658 out.go:358] Setting ErrFile to fd 2...
	I0910 18:22:56.035122   42658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:22:56.035588   42658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:22:56.036466   42658 out.go:352] Setting JSON to false
	I0910 18:22:56.037425   42658 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3928,"bootTime":1725988648,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 18:22:56.037490   42658 start.go:139] virtualization: kvm guest
	I0910 18:22:56.039270   42658 out.go:177] * [multinode-925076] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 18:22:56.040726   42658 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:22:56.040726   42658 notify.go:220] Checking for updates...
	I0910 18:22:56.043195   42658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:22:56.044422   42658 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:22:56.045532   42658 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:22:56.046685   42658 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 18:22:56.047912   42658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:22:56.049332   42658 config.go:182] Loaded profile config "multinode-925076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:22:56.049456   42658 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:22:56.049880   42658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:22:56.049932   42658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:22:56.064454   42658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0910 18:22:56.064872   42658 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:22:56.065481   42658 main.go:141] libmachine: Using API Version  1
	I0910 18:22:56.065503   42658 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:22:56.065813   42658 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:22:56.065968   42658 main.go:141] libmachine: (multinode-925076) Calling .DriverName
	I0910 18:22:56.100319   42658 out.go:177] * Using the kvm2 driver based on existing profile
	I0910 18:22:56.101356   42658 start.go:297] selected driver: kvm2
	I0910 18:22:56.101369   42658 start.go:901] validating driver "kvm2" against &{Name:multinode-925076 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-925076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.164 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:22:56.101501   42658 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:22:56.101799   42658 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:22:56.101860   42658 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 18:22:56.115445   42658 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 18:22:56.116085   42658 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:22:56.116146   42658 cni.go:84] Creating CNI manager for ""
	I0910 18:22:56.116156   42658 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0910 18:22:56.116202   42658 start.go:340] cluster config:
	{Name:multinode-925076 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-925076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.164 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:22:56.116335   42658 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:22:56.118668   42658 out.go:177] * Starting "multinode-925076" primary control-plane node in "multinode-925076" cluster
	I0910 18:22:56.119806   42658 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:22:56.119830   42658 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0910 18:22:56.119838   42658 cache.go:56] Caching tarball of preloaded images
	I0910 18:22:56.119898   42658 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 18:22:56.119907   42658 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 18:22:56.120024   42658 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/config.json ...
	I0910 18:22:56.120210   42658 start.go:360] acquireMachinesLock for multinode-925076: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:22:56.120247   42658 start.go:364] duration metric: took 21.961µs to acquireMachinesLock for "multinode-925076"
	I0910 18:22:56.120259   42658 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:22:56.120270   42658 fix.go:54] fixHost starting: 
	I0910 18:22:56.120510   42658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:22:56.120539   42658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:22:56.134186   42658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44625
	I0910 18:22:56.134611   42658 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:22:56.135085   42658 main.go:141] libmachine: Using API Version  1
	I0910 18:22:56.135107   42658 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:22:56.135389   42658 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:22:56.135540   42658 main.go:141] libmachine: (multinode-925076) Calling .DriverName
	I0910 18:22:56.135710   42658 main.go:141] libmachine: (multinode-925076) Calling .GetState
	I0910 18:22:56.137098   42658 fix.go:112] recreateIfNeeded on multinode-925076: state=Running err=<nil>
	W0910 18:22:56.137114   42658 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:22:56.138927   42658 out.go:177] * Updating the running kvm2 "multinode-925076" VM ...
	I0910 18:22:56.140044   42658 machine.go:93] provisionDockerMachine start ...
	I0910 18:22:56.140062   42658 main.go:141] libmachine: (multinode-925076) Calling .DriverName
	I0910 18:22:56.140247   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:22:56.142701   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.143149   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:22:56.143176   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.143305   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:22:56.143446   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:22:56.143609   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:22:56.143740   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:22:56.143890   42658 main.go:141] libmachine: Using SSH client type: native
	I0910 18:22:56.144073   42658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0910 18:22:56.144082   42658 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:22:56.254035   42658 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-925076
	
	I0910 18:22:56.254062   42658 main.go:141] libmachine: (multinode-925076) Calling .GetMachineName
	I0910 18:22:56.254333   42658 buildroot.go:166] provisioning hostname "multinode-925076"
	I0910 18:22:56.254361   42658 main.go:141] libmachine: (multinode-925076) Calling .GetMachineName
	I0910 18:22:56.254562   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:22:56.257527   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.257840   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:22:56.257878   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.258029   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:22:56.258199   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:22:56.258372   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:22:56.258499   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:22:56.258691   42658 main.go:141] libmachine: Using SSH client type: native
	I0910 18:22:56.258849   42658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0910 18:22:56.258863   42658 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-925076 && echo "multinode-925076" | sudo tee /etc/hostname
	I0910 18:22:56.381830   42658 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-925076
	
	I0910 18:22:56.381859   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:22:56.384556   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.384939   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:22:56.384967   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.385140   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:22:56.385352   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:22:56.385517   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:22:56.385656   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:22:56.385788   42658 main.go:141] libmachine: Using SSH client type: native
	I0910 18:22:56.386001   42658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0910 18:22:56.386018   42658 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-925076' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-925076/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-925076' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:22:56.498460   42658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:22:56.498493   42658 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:22:56.498530   42658 buildroot.go:174] setting up certificates
	I0910 18:22:56.498540   42658 provision.go:84] configureAuth start
	I0910 18:22:56.498549   42658 main.go:141] libmachine: (multinode-925076) Calling .GetMachineName
	I0910 18:22:56.498852   42658 main.go:141] libmachine: (multinode-925076) Calling .GetIP
	I0910 18:22:56.501431   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.501879   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:22:56.501916   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.502101   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:22:56.504190   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.504515   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:22:56.504547   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.504696   42658 provision.go:143] copyHostCerts
	I0910 18:22:56.504731   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:22:56.504768   42658 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:22:56.504779   42658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:22:56.504850   42658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:22:56.504926   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:22:56.504943   42658 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:22:56.504950   42658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:22:56.504974   42658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:22:56.505015   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:22:56.505031   42658 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:22:56.505037   42658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:22:56.505063   42658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:22:56.505138   42658 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.multinode-925076 san=[127.0.0.1 192.168.39.248 localhost minikube multinode-925076]
	I0910 18:22:56.718149   42658 provision.go:177] copyRemoteCerts
	I0910 18:22:56.718206   42658 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:22:56.718226   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:22:56.721188   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.721592   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:22:56.721619   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.721833   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:22:56.722074   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:22:56.722232   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:22:56.722385   42658 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/multinode-925076/id_rsa Username:docker}
	I0910 18:22:56.803240   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0910 18:22:56.803321   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:22:56.828440   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0910 18:22:56.828494   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0910 18:22:56.851622   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0910 18:22:56.851695   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:22:56.876280   42658 provision.go:87] duration metric: took 377.728415ms to configureAuth
	I0910 18:22:56.876305   42658 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:22:56.876528   42658 config.go:182] Loaded profile config "multinode-925076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:22:56.876597   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:22:56.879082   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.879452   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:22:56.879484   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.879650   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:22:56.879833   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:22:56.879971   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:22:56.880080   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:22:56.880267   42658 main.go:141] libmachine: Using SSH client type: native
	I0910 18:22:56.880449   42658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0910 18:22:56.880470   42658 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:24:27.511751   42658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:24:27.511785   42658 machine.go:96] duration metric: took 1m31.371726745s to provisionDockerMachine
	I0910 18:24:27.511805   42658 start.go:293] postStartSetup for "multinode-925076" (driver="kvm2")
	I0910 18:24:27.511843   42658 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:24:27.511868   42658 main.go:141] libmachine: (multinode-925076) Calling .DriverName
	I0910 18:24:27.512240   42658 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:24:27.512272   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:24:27.515268   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.515586   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:24:27.515609   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.515773   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:24:27.515953   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:24:27.516092   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:24:27.516219   42658 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/multinode-925076/id_rsa Username:docker}
	I0910 18:24:27.601195   42658 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:24:27.605580   42658 command_runner.go:130] > NAME=Buildroot
	I0910 18:24:27.605607   42658 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0910 18:24:27.605614   42658 command_runner.go:130] > ID=buildroot
	I0910 18:24:27.605620   42658 command_runner.go:130] > VERSION_ID=2023.02.9
	I0910 18:24:27.605628   42658 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0910 18:24:27.605801   42658 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:24:27.605823   42658 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:24:27.605907   42658 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:24:27.605987   42658 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:24:27.605996   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /etc/ssl/certs/131212.pem
	I0910 18:24:27.606071   42658 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:24:27.615918   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:24:27.639523   42658 start.go:296] duration metric: took 127.706079ms for postStartSetup
	I0910 18:24:27.639579   42658 fix.go:56] duration metric: took 1m31.519296068s for fixHost
	I0910 18:24:27.639606   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:24:27.641810   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.642191   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:24:27.642215   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.642354   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:24:27.642543   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:24:27.642698   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:24:27.642817   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:24:27.642952   42658 main.go:141] libmachine: Using SSH client type: native
	I0910 18:24:27.643152   42658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0910 18:24:27.643163   42658 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:24:27.745591   42658 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725992667.718605174
	
	I0910 18:24:27.745613   42658 fix.go:216] guest clock: 1725992667.718605174
	I0910 18:24:27.745619   42658 fix.go:229] Guest: 2024-09-10 18:24:27.718605174 +0000 UTC Remote: 2024-09-10 18:24:27.639587581 +0000 UTC m=+91.639859880 (delta=79.017593ms)
	I0910 18:24:27.745649   42658 fix.go:200] guest clock delta is within tolerance: 79.017593ms
	I0910 18:24:27.745656   42658 start.go:83] releasing machines lock for "multinode-925076", held for 1m31.625400367s
	I0910 18:24:27.745686   42658 main.go:141] libmachine: (multinode-925076) Calling .DriverName
	I0910 18:24:27.745917   42658 main.go:141] libmachine: (multinode-925076) Calling .GetIP
	I0910 18:24:27.748131   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.748492   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:24:27.748529   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.748635   42658 main.go:141] libmachine: (multinode-925076) Calling .DriverName
	I0910 18:24:27.749097   42658 main.go:141] libmachine: (multinode-925076) Calling .DriverName
	I0910 18:24:27.749247   42658 main.go:141] libmachine: (multinode-925076) Calling .DriverName
	I0910 18:24:27.749346   42658 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:24:27.749415   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:24:27.749431   42658 ssh_runner.go:195] Run: cat /version.json
	I0910 18:24:27.749453   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:24:27.751781   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.751896   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.752176   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:24:27.752204   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.752316   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:24:27.752441   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:24:27.752470   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.752480   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:24:27.752652   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:24:27.752685   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:24:27.752809   42658 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/multinode-925076/id_rsa Username:docker}
	I0910 18:24:27.752871   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:24:27.753006   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:24:27.753149   42658 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/multinode-925076/id_rsa Username:docker}
	I0910 18:24:27.829144   42658 command_runner.go:130] > {"iso_version": "v1.34.0-1725912912-19598", "kicbase_version": "v0.0.45", "minikube_version": "v1.34.0", "commit": "a47e98bacf93197560d0f08408949de0434951d5"}
	I0910 18:24:27.849471   42658 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0910 18:24:27.850229   42658 ssh_runner.go:195] Run: systemctl --version
	I0910 18:24:27.856126   42658 command_runner.go:130] > systemd 252 (252)
	I0910 18:24:27.856156   42658 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0910 18:24:27.856209   42658 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:24:28.009619   42658 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0910 18:24:28.017645   42658 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0910 18:24:28.017842   42658 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:24:28.017899   42658 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:24:28.027151   42658 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0910 18:24:28.027170   42658 start.go:495] detecting cgroup driver to use...
	I0910 18:24:28.027219   42658 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:24:28.042943   42658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:24:28.057013   42658 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:24:28.057064   42658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:24:28.069889   42658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:24:28.082724   42658 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:24:28.225690   42658 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:24:28.362524   42658 docker.go:233] disabling docker service ...
	I0910 18:24:28.362587   42658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:24:28.379066   42658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:24:28.392555   42658 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:24:28.526705   42658 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:24:28.665198   42658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:24:28.678339   42658 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:24:28.698834   42658 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0910 18:24:28.698880   42658 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:24:28.698920   42658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:24:28.709239   42658 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:24:28.709304   42658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:24:28.719409   42658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:24:28.729384   42658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:24:28.739496   42658 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:24:28.749500   42658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:24:28.759515   42658 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:24:28.771235   42658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:24:28.781740   42658 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:24:28.791329   42658 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0910 18:24:28.791395   42658 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:24:28.802097   42658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:24:28.936470   42658 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:24:33.778790   42658 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.84228859s)
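	(For readers reproducing this step by hand: the cri-o reconfiguration that the runner performed over ssh above boils down to the following shell sequence. This is a condensed sketch of the commands visible in the log, not the exact harness code; it assumes the same /etc/crio/crio.conf.d/02-crio.conf drop-in, and the actual run additionally rewrites default_sysctls and clears stale conmon_cgroup entries before restarting.)
	# Point crictl at the cri-o socket, as in the logged tee to /etc/crictl.yaml.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# Pin the pause image and switch cri-o to the cgroupfs cgroup manager.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# Reload units and restart the runtime so the drop-in takes effect.
	sudo systemctl daemon-reload
	sudo systemctl restart crio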
	I0910 18:24:33.778821   42658 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:24:33.778871   42658 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:24:33.784570   42658 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0910 18:24:33.784592   42658 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0910 18:24:33.784601   42658 command_runner.go:130] > Device: 0,22	Inode: 1313        Links: 1
	I0910 18:24:33.784611   42658 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0910 18:24:33.784619   42658 command_runner.go:130] > Access: 2024-09-10 18:24:33.695649200 +0000
	I0910 18:24:33.784633   42658 command_runner.go:130] > Modify: 2024-09-10 18:24:33.645647827 +0000
	I0910 18:24:33.784648   42658 command_runner.go:130] > Change: 2024-09-10 18:24:33.645647827 +0000
	I0910 18:24:33.784655   42658 command_runner.go:130] >  Birth: -
	I0910 18:24:33.784794   42658 start.go:563] Will wait 60s for crictl version
	I0910 18:24:33.784842   42658 ssh_runner.go:195] Run: which crictl
	I0910 18:24:33.788723   42658 command_runner.go:130] > /usr/bin/crictl
	I0910 18:24:33.788792   42658 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:24:33.833338   42658 command_runner.go:130] > Version:  0.1.0
	I0910 18:24:33.833360   42658 command_runner.go:130] > RuntimeName:  cri-o
	I0910 18:24:33.833365   42658 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0910 18:24:33.833370   42658 command_runner.go:130] > RuntimeApiVersion:  v1
	I0910 18:24:33.833386   42658 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:24:33.833454   42658 ssh_runner.go:195] Run: crio --version
	I0910 18:24:33.861992   42658 command_runner.go:130] > crio version 1.29.1
	I0910 18:24:33.862013   42658 command_runner.go:130] > Version:        1.29.1
	I0910 18:24:33.862019   42658 command_runner.go:130] > GitCommit:      unknown
	I0910 18:24:33.862023   42658 command_runner.go:130] > GitCommitDate:  unknown
	I0910 18:24:33.862027   42658 command_runner.go:130] > GitTreeState:   clean
	I0910 18:24:33.862035   42658 command_runner.go:130] > BuildDate:      2024-09-10T02:34:15Z
	I0910 18:24:33.862040   42658 command_runner.go:130] > GoVersion:      go1.21.6
	I0910 18:24:33.862043   42658 command_runner.go:130] > Compiler:       gc
	I0910 18:24:33.862053   42658 command_runner.go:130] > Platform:       linux/amd64
	I0910 18:24:33.862059   42658 command_runner.go:130] > Linkmode:       dynamic
	I0910 18:24:33.862068   42658 command_runner.go:130] > BuildTags:      
	I0910 18:24:33.862075   42658 command_runner.go:130] >   containers_image_ostree_stub
	I0910 18:24:33.862085   42658 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0910 18:24:33.862090   42658 command_runner.go:130] >   btrfs_noversion
	I0910 18:24:33.862106   42658 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0910 18:24:33.862112   42658 command_runner.go:130] >   libdm_no_deferred_remove
	I0910 18:24:33.862116   42658 command_runner.go:130] >   seccomp
	I0910 18:24:33.862120   42658 command_runner.go:130] > LDFlags:          unknown
	I0910 18:24:33.862128   42658 command_runner.go:130] > SeccompEnabled:   true
	I0910 18:24:33.862132   42658 command_runner.go:130] > AppArmorEnabled:  false
	I0910 18:24:33.862245   42658 ssh_runner.go:195] Run: crio --version
	I0910 18:24:33.890427   42658 command_runner.go:130] > crio version 1.29.1
	I0910 18:24:33.890448   42658 command_runner.go:130] > Version:        1.29.1
	I0910 18:24:33.890470   42658 command_runner.go:130] > GitCommit:      unknown
	I0910 18:24:33.890476   42658 command_runner.go:130] > GitCommitDate:  unknown
	I0910 18:24:33.890483   42658 command_runner.go:130] > GitTreeState:   clean
	I0910 18:24:33.890492   42658 command_runner.go:130] > BuildDate:      2024-09-10T02:34:15Z
	I0910 18:24:33.890499   42658 command_runner.go:130] > GoVersion:      go1.21.6
	I0910 18:24:33.890505   42658 command_runner.go:130] > Compiler:       gc
	I0910 18:24:33.890510   42658 command_runner.go:130] > Platform:       linux/amd64
	I0910 18:24:33.890513   42658 command_runner.go:130] > Linkmode:       dynamic
	I0910 18:24:33.890517   42658 command_runner.go:130] > BuildTags:      
	I0910 18:24:33.890522   42658 command_runner.go:130] >   containers_image_ostree_stub
	I0910 18:24:33.890526   42658 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0910 18:24:33.890529   42658 command_runner.go:130] >   btrfs_noversion
	I0910 18:24:33.890534   42658 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0910 18:24:33.890538   42658 command_runner.go:130] >   libdm_no_deferred_remove
	I0910 18:24:33.890542   42658 command_runner.go:130] >   seccomp
	I0910 18:24:33.890545   42658 command_runner.go:130] > LDFlags:          unknown
	I0910 18:24:33.890556   42658 command_runner.go:130] > SeccompEnabled:   true
	I0910 18:24:33.890564   42658 command_runner.go:130] > AppArmorEnabled:  false
	I0910 18:24:33.894093   42658 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 18:24:33.895283   42658 main.go:141] libmachine: (multinode-925076) Calling .GetIP
	I0910 18:24:33.897859   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:33.898227   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:24:33.898249   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:33.898500   42658 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 18:24:33.902837   42658 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0910 18:24:33.902930   42658 kubeadm.go:883] updating cluster {Name:multinode-925076 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-925076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.164 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:24:33.903078   42658 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:24:33.903134   42658 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:24:33.948482   42658 command_runner.go:130] > {
	I0910 18:24:33.948503   42658 command_runner.go:130] >   "images": [
	I0910 18:24:33.948508   42658 command_runner.go:130] >     {
	I0910 18:24:33.948519   42658 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0910 18:24:33.948525   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.948532   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0910 18:24:33.948537   42658 command_runner.go:130] >       ],
	I0910 18:24:33.948543   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.948555   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0910 18:24:33.948569   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0910 18:24:33.948578   42658 command_runner.go:130] >       ],
	I0910 18:24:33.948586   42658 command_runner.go:130] >       "size": "87165492",
	I0910 18:24:33.948596   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.948605   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.948615   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.948627   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.948635   42658 command_runner.go:130] >     },
	I0910 18:24:33.948642   42658 command_runner.go:130] >     {
	I0910 18:24:33.948657   42658 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0910 18:24:33.948667   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.948679   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0910 18:24:33.948687   42658 command_runner.go:130] >       ],
	I0910 18:24:33.948695   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.948711   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0910 18:24:33.948725   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0910 18:24:33.948734   42658 command_runner.go:130] >       ],
	I0910 18:24:33.948742   42658 command_runner.go:130] >       "size": "87190579",
	I0910 18:24:33.948751   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.948765   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.948774   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.948782   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.948790   42658 command_runner.go:130] >     },
	I0910 18:24:33.948797   42658 command_runner.go:130] >     {
	I0910 18:24:33.948811   42658 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0910 18:24:33.948828   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.948840   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0910 18:24:33.948849   42658 command_runner.go:130] >       ],
	I0910 18:24:33.948857   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.948879   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0910 18:24:33.948895   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0910 18:24:33.948904   42658 command_runner.go:130] >       ],
	I0910 18:24:33.948913   42658 command_runner.go:130] >       "size": "1363676",
	I0910 18:24:33.948923   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.948933   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.948942   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.948952   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.948960   42658 command_runner.go:130] >     },
	I0910 18:24:33.948965   42658 command_runner.go:130] >     {
	I0910 18:24:33.948976   42658 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0910 18:24:33.948985   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.948994   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0910 18:24:33.949003   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949010   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.949026   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0910 18:24:33.949049   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0910 18:24:33.949058   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949066   42658 command_runner.go:130] >       "size": "31470524",
	I0910 18:24:33.949087   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.949097   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.949106   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.949114   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.949122   42658 command_runner.go:130] >     },
	I0910 18:24:33.949128   42658 command_runner.go:130] >     {
	I0910 18:24:33.949142   42658 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0910 18:24:33.949151   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.949160   42658 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0910 18:24:33.949169   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949177   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.949192   42658 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0910 18:24:33.949207   42658 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0910 18:24:33.949222   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949233   42658 command_runner.go:130] >       "size": "61245718",
	I0910 18:24:33.949242   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.949250   42658 command_runner.go:130] >       "username": "nonroot",
	I0910 18:24:33.949269   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.949279   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.949287   42658 command_runner.go:130] >     },
	I0910 18:24:33.949295   42658 command_runner.go:130] >     {
	I0910 18:24:33.949306   42658 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0910 18:24:33.949316   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.949324   42658 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0910 18:24:33.949333   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949345   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.949360   42658 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0910 18:24:33.949378   42658 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0910 18:24:33.949386   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949394   42658 command_runner.go:130] >       "size": "149009664",
	I0910 18:24:33.949404   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.949410   42658 command_runner.go:130] >         "value": "0"
	I0910 18:24:33.949414   42658 command_runner.go:130] >       },
	I0910 18:24:33.949420   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.949425   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.949432   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.949438   42658 command_runner.go:130] >     },
	I0910 18:24:33.949444   42658 command_runner.go:130] >     {
	I0910 18:24:33.949453   42658 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0910 18:24:33.949460   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.949465   42658 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0910 18:24:33.949472   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949476   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.949483   42658 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0910 18:24:33.949491   42658 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0910 18:24:33.949494   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949503   42658 command_runner.go:130] >       "size": "95233506",
	I0910 18:24:33.949508   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.949515   42658 command_runner.go:130] >         "value": "0"
	I0910 18:24:33.949527   42658 command_runner.go:130] >       },
	I0910 18:24:33.949539   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.949545   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.949552   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.949557   42658 command_runner.go:130] >     },
	I0910 18:24:33.949562   42658 command_runner.go:130] >     {
	I0910 18:24:33.949573   42658 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0910 18:24:33.949582   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.949590   42658 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0910 18:24:33.949599   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949606   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.949641   42658 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0910 18:24:33.949657   42658 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0910 18:24:33.949663   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949671   42658 command_runner.go:130] >       "size": "89437512",
	I0910 18:24:33.949677   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.949686   42658 command_runner.go:130] >         "value": "0"
	I0910 18:24:33.949692   42658 command_runner.go:130] >       },
	I0910 18:24:33.949699   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.949705   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.949711   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.949715   42658 command_runner.go:130] >     },
	I0910 18:24:33.949721   42658 command_runner.go:130] >     {
	I0910 18:24:33.949730   42658 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0910 18:24:33.949736   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.949744   42658 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0910 18:24:33.949750   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949757   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.949772   42658 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0910 18:24:33.949780   42658 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0910 18:24:33.949783   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949787   42658 command_runner.go:130] >       "size": "92728217",
	I0910 18:24:33.949791   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.949794   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.949798   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.949801   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.949809   42658 command_runner.go:130] >     },
	I0910 18:24:33.949812   42658 command_runner.go:130] >     {
	I0910 18:24:33.949817   42658 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0910 18:24:33.949821   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.949826   42658 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0910 18:24:33.949829   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949833   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.949840   42658 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0910 18:24:33.949848   42658 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0910 18:24:33.949851   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949855   42658 command_runner.go:130] >       "size": "68420936",
	I0910 18:24:33.949859   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.949863   42658 command_runner.go:130] >         "value": "0"
	I0910 18:24:33.949866   42658 command_runner.go:130] >       },
	I0910 18:24:33.949870   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.949879   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.949885   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.949889   42658 command_runner.go:130] >     },
	I0910 18:24:33.949892   42658 command_runner.go:130] >     {
	I0910 18:24:33.949898   42658 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0910 18:24:33.949904   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.949908   42658 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0910 18:24:33.949914   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949918   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.949925   42658 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0910 18:24:33.949932   42658 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0910 18:24:33.949937   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949941   42658 command_runner.go:130] >       "size": "742080",
	I0910 18:24:33.949945   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.949951   42658 command_runner.go:130] >         "value": "65535"
	I0910 18:24:33.949954   42658 command_runner.go:130] >       },
	I0910 18:24:33.949958   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.949962   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.949966   42658 command_runner.go:130] >       "pinned": true
	I0910 18:24:33.949969   42658 command_runner.go:130] >     }
	I0910 18:24:33.949972   42658 command_runner.go:130] >   ]
	I0910 18:24:33.949980   42658 command_runner.go:130] > }
	I0910 18:24:33.950168   42658 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:24:33.950179   42658 crio.go:433] Images already preloaded, skipping extraction
	I0910 18:24:33.950226   42658 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:24:33.988606   42658 command_runner.go:130] > {
	I0910 18:24:33.988631   42658 command_runner.go:130] >   "images": [
	I0910 18:24:33.988637   42658 command_runner.go:130] >     {
	I0910 18:24:33.988646   42658 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0910 18:24:33.988651   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.988660   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0910 18:24:33.988666   42658 command_runner.go:130] >       ],
	I0910 18:24:33.988672   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.988699   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0910 18:24:33.988711   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0910 18:24:33.988717   42658 command_runner.go:130] >       ],
	I0910 18:24:33.988723   42658 command_runner.go:130] >       "size": "87165492",
	I0910 18:24:33.988729   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.988739   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.988750   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.988760   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.988766   42658 command_runner.go:130] >     },
	I0910 18:24:33.988770   42658 command_runner.go:130] >     {
	I0910 18:24:33.988780   42658 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0910 18:24:33.988787   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.988795   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0910 18:24:33.988801   42658 command_runner.go:130] >       ],
	I0910 18:24:33.988808   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.988815   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0910 18:24:33.988823   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0910 18:24:33.988827   42658 command_runner.go:130] >       ],
	I0910 18:24:33.988831   42658 command_runner.go:130] >       "size": "87190579",
	I0910 18:24:33.988835   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.988845   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.988851   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.988855   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.988863   42658 command_runner.go:130] >     },
	I0910 18:24:33.988872   42658 command_runner.go:130] >     {
	I0910 18:24:33.988878   42658 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0910 18:24:33.988884   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.988889   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0910 18:24:33.988892   42658 command_runner.go:130] >       ],
	I0910 18:24:33.988896   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.988903   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0910 18:24:33.988911   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0910 18:24:33.988914   42658 command_runner.go:130] >       ],
	I0910 18:24:33.988918   42658 command_runner.go:130] >       "size": "1363676",
	I0910 18:24:33.988922   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.988926   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.988932   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.988937   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.988940   42658 command_runner.go:130] >     },
	I0910 18:24:33.988943   42658 command_runner.go:130] >     {
	I0910 18:24:33.988949   42658 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0910 18:24:33.988955   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.988960   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0910 18:24:33.988966   42658 command_runner.go:130] >       ],
	I0910 18:24:33.988970   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.988977   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0910 18:24:33.988990   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0910 18:24:33.988997   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989001   42658 command_runner.go:130] >       "size": "31470524",
	I0910 18:24:33.989004   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.989008   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.989012   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.989016   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.989019   42658 command_runner.go:130] >     },
	I0910 18:24:33.989023   42658 command_runner.go:130] >     {
	I0910 18:24:33.989029   42658 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0910 18:24:33.989035   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.989040   42658 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0910 18:24:33.989043   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989057   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.989066   42658 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0910 18:24:33.989094   42658 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0910 18:24:33.989103   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989107   42658 command_runner.go:130] >       "size": "61245718",
	I0910 18:24:33.989111   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.989115   42658 command_runner.go:130] >       "username": "nonroot",
	I0910 18:24:33.989119   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.989122   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.989126   42658 command_runner.go:130] >     },
	I0910 18:24:33.989129   42658 command_runner.go:130] >     {
	I0910 18:24:33.989135   42658 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0910 18:24:33.989144   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.989149   42658 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0910 18:24:33.989154   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989158   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.989167   42658 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0910 18:24:33.989173   42658 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0910 18:24:33.989179   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989182   42658 command_runner.go:130] >       "size": "149009664",
	I0910 18:24:33.989186   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.989193   42658 command_runner.go:130] >         "value": "0"
	I0910 18:24:33.989199   42658 command_runner.go:130] >       },
	I0910 18:24:33.989203   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.989207   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.989211   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.989214   42658 command_runner.go:130] >     },
	I0910 18:24:33.989217   42658 command_runner.go:130] >     {
	I0910 18:24:33.989223   42658 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0910 18:24:33.989229   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.989233   42658 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0910 18:24:33.989237   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989241   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.989247   42658 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0910 18:24:33.989256   42658 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0910 18:24:33.989259   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989270   42658 command_runner.go:130] >       "size": "95233506",
	I0910 18:24:33.989276   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.989279   42658 command_runner.go:130] >         "value": "0"
	I0910 18:24:33.989283   42658 command_runner.go:130] >       },
	I0910 18:24:33.989286   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.989290   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.989294   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.989297   42658 command_runner.go:130] >     },
	I0910 18:24:33.989303   42658 command_runner.go:130] >     {
	I0910 18:24:33.989308   42658 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0910 18:24:33.989314   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.989319   42658 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0910 18:24:33.989323   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989326   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.989348   42658 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0910 18:24:33.989358   42658 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0910 18:24:33.989362   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989366   42658 command_runner.go:130] >       "size": "89437512",
	I0910 18:24:33.989370   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.989374   42658 command_runner.go:130] >         "value": "0"
	I0910 18:24:33.989377   42658 command_runner.go:130] >       },
	I0910 18:24:33.989381   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.989385   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.989389   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.989392   42658 command_runner.go:130] >     },
	I0910 18:24:33.989395   42658 command_runner.go:130] >     {
	I0910 18:24:33.989401   42658 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0910 18:24:33.989407   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.989412   42658 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0910 18:24:33.989415   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989419   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.989425   42658 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0910 18:24:33.989437   42658 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0910 18:24:33.989440   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989445   42658 command_runner.go:130] >       "size": "92728217",
	I0910 18:24:33.989450   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.989459   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.989465   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.989469   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.989473   42658 command_runner.go:130] >     },
	I0910 18:24:33.989476   42658 command_runner.go:130] >     {
	I0910 18:24:33.989482   42658 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0910 18:24:33.989486   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.989491   42658 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0910 18:24:33.989494   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989497   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.989504   42658 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0910 18:24:33.989513   42658 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0910 18:24:33.989519   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989524   42658 command_runner.go:130] >       "size": "68420936",
	I0910 18:24:33.989528   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.989532   42658 command_runner.go:130] >         "value": "0"
	I0910 18:24:33.989535   42658 command_runner.go:130] >       },
	I0910 18:24:33.989540   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.989545   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.989549   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.989553   42658 command_runner.go:130] >     },
	I0910 18:24:33.989556   42658 command_runner.go:130] >     {
	I0910 18:24:33.989562   42658 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0910 18:24:33.989568   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.989573   42658 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0910 18:24:33.989576   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989579   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.989586   42658 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0910 18:24:33.989593   42658 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0910 18:24:33.989603   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989607   42658 command_runner.go:130] >       "size": "742080",
	I0910 18:24:33.989611   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.989615   42658 command_runner.go:130] >         "value": "65535"
	I0910 18:24:33.989618   42658 command_runner.go:130] >       },
	I0910 18:24:33.989622   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.989628   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.989636   42658 command_runner.go:130] >       "pinned": true
	I0910 18:24:33.989641   42658 command_runner.go:130] >     }
	I0910 18:24:33.989645   42658 command_runner.go:130] >   ]
	I0910 18:24:33.989648   42658 command_runner.go:130] > }
	I0910 18:24:33.989759   42658 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:24:33.989769   42658 cache_images.go:84] Images are preloaded, skipping loading
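	(Both crictl invocations above returned the same preloaded image set, so image extraction is skipped. To eyeball the same result on the node, the JSON can be reduced to tags; a minimal sketch assuming jq is available in the guest, which is not part of the logged run.)
	# Print only the repo tags from the image list cri-o reports.
	sudo crictl images --output json | jq -r '.images[].repoTags[]'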
	I0910 18:24:33.989775   42658 kubeadm.go:934] updating node { 192.168.39.248 8443 v1.31.0 crio true true} ...
	I0910 18:24:33.989898   42658 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-925076 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-925076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:24:33.989982   42658 ssh_runner.go:195] Run: crio config
	I0910 18:24:34.023136   42658 command_runner.go:130] ! time="2024-09-10 18:24:33.995732341Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0910 18:24:34.029958   42658 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0910 18:24:34.034778   42658 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0910 18:24:34.034802   42658 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0910 18:24:34.034813   42658 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0910 18:24:34.034819   42658 command_runner.go:130] > #
	I0910 18:24:34.034830   42658 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0910 18:24:34.034840   42658 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0910 18:24:34.034850   42658 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0910 18:24:34.034864   42658 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0910 18:24:34.034870   42658 command_runner.go:130] > # reload'.
	I0910 18:24:34.034880   42658 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0910 18:24:34.034892   42658 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0910 18:24:34.034901   42658 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0910 18:24:34.034910   42658 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0910 18:24:34.034919   42658 command_runner.go:130] > [crio]
	I0910 18:24:34.034928   42658 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0910 18:24:34.034938   42658 command_runner.go:130] > # containers images, in this directory.
	I0910 18:24:34.034958   42658 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0910 18:24:34.034976   42658 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0910 18:24:34.034985   42658 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0910 18:24:34.034998   42658 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0910 18:24:34.035007   42658 command_runner.go:130] > # imagestore = ""
	I0910 18:24:34.035017   42658 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0910 18:24:34.035028   42658 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0910 18:24:34.035038   42658 command_runner.go:130] > storage_driver = "overlay"
	I0910 18:24:34.035049   42658 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0910 18:24:34.035060   42658 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0910 18:24:34.035069   42658 command_runner.go:130] > storage_option = [
	I0910 18:24:34.035076   42658 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0910 18:24:34.035079   42658 command_runner.go:130] > ]
	I0910 18:24:34.035087   42658 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0910 18:24:34.035093   42658 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0910 18:24:34.035104   42658 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0910 18:24:34.035121   42658 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0910 18:24:34.035134   42658 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0910 18:24:34.035140   42658 command_runner.go:130] > # always happen on a node reboot
	I0910 18:24:34.035145   42658 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0910 18:24:34.035159   42658 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0910 18:24:34.035166   42658 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0910 18:24:34.035173   42658 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0910 18:24:34.035178   42658 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0910 18:24:34.035185   42658 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0910 18:24:34.035196   42658 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0910 18:24:34.035200   42658 command_runner.go:130] > # internal_wipe = true
	I0910 18:24:34.035207   42658 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0910 18:24:34.035214   42658 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0910 18:24:34.035218   42658 command_runner.go:130] > # internal_repair = false
	I0910 18:24:34.035225   42658 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0910 18:24:34.035231   42658 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0910 18:24:34.035236   42658 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0910 18:24:34.035241   42658 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0910 18:24:34.035247   42658 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0910 18:24:34.035253   42658 command_runner.go:130] > [crio.api]
	I0910 18:24:34.035265   42658 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0910 18:24:34.035272   42658 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0910 18:24:34.035277   42658 command_runner.go:130] > # IP address on which the stream server will listen.
	I0910 18:24:34.035284   42658 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0910 18:24:34.035290   42658 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0910 18:24:34.035297   42658 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0910 18:24:34.035306   42658 command_runner.go:130] > # stream_port = "0"
	I0910 18:24:34.035313   42658 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0910 18:24:34.035317   42658 command_runner.go:130] > # stream_enable_tls = false
	I0910 18:24:34.035322   42658 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0910 18:24:34.035329   42658 command_runner.go:130] > # stream_idle_timeout = ""
	I0910 18:24:34.035340   42658 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0910 18:24:34.035348   42658 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0910 18:24:34.035352   42658 command_runner.go:130] > # minutes.
	I0910 18:24:34.035356   42658 command_runner.go:130] > # stream_tls_cert = ""
	I0910 18:24:34.035362   42658 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0910 18:24:34.035370   42658 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0910 18:24:34.035374   42658 command_runner.go:130] > # stream_tls_key = ""
	I0910 18:24:34.035381   42658 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0910 18:24:34.035387   42658 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0910 18:24:34.035407   42658 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0910 18:24:34.035413   42658 command_runner.go:130] > # stream_tls_ca = ""
	I0910 18:24:34.035420   42658 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0910 18:24:34.035427   42658 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0910 18:24:34.035434   42658 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0910 18:24:34.035440   42658 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0910 18:24:34.035446   42658 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0910 18:24:34.035453   42658 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0910 18:24:34.035457   42658 command_runner.go:130] > [crio.runtime]
	I0910 18:24:34.035465   42658 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0910 18:24:34.035470   42658 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0910 18:24:34.035476   42658 command_runner.go:130] > # "nofile=1024:2048"
	I0910 18:24:34.035481   42658 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0910 18:24:34.035486   42658 command_runner.go:130] > # default_ulimits = [
	I0910 18:24:34.035489   42658 command_runner.go:130] > # ]
	I0910 18:24:34.035494   42658 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0910 18:24:34.035504   42658 command_runner.go:130] > # no_pivot = false
	I0910 18:24:34.035510   42658 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0910 18:24:34.035516   42658 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0910 18:24:34.035522   42658 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0910 18:24:34.035528   42658 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0910 18:24:34.035535   42658 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0910 18:24:34.035541   42658 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0910 18:24:34.035547   42658 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0910 18:24:34.035553   42658 command_runner.go:130] > # Cgroup setting for conmon
	I0910 18:24:34.035561   42658 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0910 18:24:34.035565   42658 command_runner.go:130] > conmon_cgroup = "pod"
	I0910 18:24:34.035572   42658 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0910 18:24:34.035577   42658 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0910 18:24:34.035587   42658 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0910 18:24:34.035591   42658 command_runner.go:130] > conmon_env = [
	I0910 18:24:34.035599   42658 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0910 18:24:34.035601   42658 command_runner.go:130] > ]
	I0910 18:24:34.035608   42658 command_runner.go:130] > # Additional environment variables to set for all the
	I0910 18:24:34.035615   42658 command_runner.go:130] > # containers. These are overridden if set in the
	I0910 18:24:34.035621   42658 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0910 18:24:34.035625   42658 command_runner.go:130] > # default_env = [
	I0910 18:24:34.035628   42658 command_runner.go:130] > # ]
	I0910 18:24:34.035633   42658 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0910 18:24:34.035639   42658 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0910 18:24:34.035643   42658 command_runner.go:130] > # selinux = false
	I0910 18:24:34.035648   42658 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0910 18:24:34.035653   42658 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0910 18:24:34.035658   42658 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0910 18:24:34.035662   42658 command_runner.go:130] > # seccomp_profile = ""
	I0910 18:24:34.035666   42658 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0910 18:24:34.035671   42658 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0910 18:24:34.035677   42658 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0910 18:24:34.035681   42658 command_runner.go:130] > # which might increase security.
	I0910 18:24:34.035684   42658 command_runner.go:130] > # This option is currently deprecated,
	I0910 18:24:34.035690   42658 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0910 18:24:34.035694   42658 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0910 18:24:34.035704   42658 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0910 18:24:34.035712   42658 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0910 18:24:34.035718   42658 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0910 18:24:34.035725   42658 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0910 18:24:34.035729   42658 command_runner.go:130] > # This option supports live configuration reload.
	I0910 18:24:34.035736   42658 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0910 18:24:34.035741   42658 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0910 18:24:34.035748   42658 command_runner.go:130] > # the cgroup blockio controller.
	I0910 18:24:34.035752   42658 command_runner.go:130] > # blockio_config_file = ""
	I0910 18:24:34.035761   42658 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0910 18:24:34.035765   42658 command_runner.go:130] > # blockio parameters.
	I0910 18:24:34.035770   42658 command_runner.go:130] > # blockio_reload = false
	I0910 18:24:34.035777   42658 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0910 18:24:34.035782   42658 command_runner.go:130] > # irqbalance daemon.
	I0910 18:24:34.035787   42658 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0910 18:24:34.035796   42658 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0910 18:24:34.035802   42658 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0910 18:24:34.035809   42658 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0910 18:24:34.035814   42658 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0910 18:24:34.035821   42658 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0910 18:24:34.035826   42658 command_runner.go:130] > # This option supports live configuration reload.
	I0910 18:24:34.035833   42658 command_runner.go:130] > # rdt_config_file = ""
	I0910 18:24:34.035838   42658 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0910 18:24:34.035844   42658 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0910 18:24:34.035878   42658 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0910 18:24:34.035889   42658 command_runner.go:130] > # separate_pull_cgroup = ""
	I0910 18:24:34.035898   42658 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0910 18:24:34.035906   42658 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0910 18:24:34.035915   42658 command_runner.go:130] > # will be added.
	I0910 18:24:34.035922   42658 command_runner.go:130] > # default_capabilities = [
	I0910 18:24:34.035930   42658 command_runner.go:130] > # 	"CHOWN",
	I0910 18:24:34.035936   42658 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0910 18:24:34.035943   42658 command_runner.go:130] > # 	"FSETID",
	I0910 18:24:34.035949   42658 command_runner.go:130] > # 	"FOWNER",
	I0910 18:24:34.035957   42658 command_runner.go:130] > # 	"SETGID",
	I0910 18:24:34.035963   42658 command_runner.go:130] > # 	"SETUID",
	I0910 18:24:34.035975   42658 command_runner.go:130] > # 	"SETPCAP",
	I0910 18:24:34.035984   42658 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0910 18:24:34.035987   42658 command_runner.go:130] > # 	"KILL",
	I0910 18:24:34.035990   42658 command_runner.go:130] > # ]
	I0910 18:24:34.035997   42658 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0910 18:24:34.036006   42658 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0910 18:24:34.036010   42658 command_runner.go:130] > # add_inheritable_capabilities = false
	I0910 18:24:34.036019   42658 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0910 18:24:34.036024   42658 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0910 18:24:34.036030   42658 command_runner.go:130] > default_sysctls = [
	I0910 18:24:34.036035   42658 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0910 18:24:34.036040   42658 command_runner.go:130] > ]
	I0910 18:24:34.036044   42658 command_runner.go:130] > # List of devices on the host that a
	I0910 18:24:34.036052   42658 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0910 18:24:34.036056   42658 command_runner.go:130] > # allowed_devices = [
	I0910 18:24:34.036061   42658 command_runner.go:130] > # 	"/dev/fuse",
	I0910 18:24:34.036064   42658 command_runner.go:130] > # ]
	I0910 18:24:34.036071   42658 command_runner.go:130] > # List of additional devices, specified as
	I0910 18:24:34.036077   42658 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0910 18:24:34.036084   42658 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0910 18:24:34.036092   42658 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0910 18:24:34.036098   42658 command_runner.go:130] > # additional_devices = [
	I0910 18:24:34.036101   42658 command_runner.go:130] > # ]
	I0910 18:24:34.036106   42658 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0910 18:24:34.036112   42658 command_runner.go:130] > # cdi_spec_dirs = [
	I0910 18:24:34.036115   42658 command_runner.go:130] > # 	"/etc/cdi",
	I0910 18:24:34.036119   42658 command_runner.go:130] > # 	"/var/run/cdi",
	I0910 18:24:34.036122   42658 command_runner.go:130] > # ]
	I0910 18:24:34.036128   42658 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0910 18:24:34.036136   42658 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0910 18:24:34.036141   42658 command_runner.go:130] > # Defaults to false.
	I0910 18:24:34.036148   42658 command_runner.go:130] > # device_ownership_from_security_context = false
	I0910 18:24:34.036154   42658 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0910 18:24:34.036162   42658 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0910 18:24:34.036165   42658 command_runner.go:130] > # hooks_dir = [
	I0910 18:24:34.036169   42658 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0910 18:24:34.036181   42658 command_runner.go:130] > # ]
	I0910 18:24:34.036189   42658 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0910 18:24:34.036195   42658 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0910 18:24:34.036201   42658 command_runner.go:130] > # its default mounts from the following two files:
	I0910 18:24:34.036205   42658 command_runner.go:130] > #
	I0910 18:24:34.036211   42658 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0910 18:24:34.036218   42658 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0910 18:24:34.036223   42658 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0910 18:24:34.036228   42658 command_runner.go:130] > #
	I0910 18:24:34.036234   42658 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0910 18:24:34.036242   42658 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0910 18:24:34.036248   42658 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0910 18:24:34.036255   42658 command_runner.go:130] > #      only add mounts it finds in this file.
	I0910 18:24:34.036258   42658 command_runner.go:130] > #
	I0910 18:24:34.036262   42658 command_runner.go:130] > # default_mounts_file = ""
	I0910 18:24:34.036266   42658 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0910 18:24:34.036273   42658 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0910 18:24:34.036279   42658 command_runner.go:130] > pids_limit = 1024
	I0910 18:24:34.036284   42658 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0910 18:24:34.036292   42658 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0910 18:24:34.036298   42658 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0910 18:24:34.036311   42658 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0910 18:24:34.036315   42658 command_runner.go:130] > # log_size_max = -1
	I0910 18:24:34.036323   42658 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0910 18:24:34.036331   42658 command_runner.go:130] > # log_to_journald = false
	I0910 18:24:34.036337   42658 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0910 18:24:34.036343   42658 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0910 18:24:34.036349   42658 command_runner.go:130] > # Path to directory for container attach sockets.
	I0910 18:24:34.036354   42658 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0910 18:24:34.036359   42658 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0910 18:24:34.036365   42658 command_runner.go:130] > # bind_mount_prefix = ""
	I0910 18:24:34.036370   42658 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0910 18:24:34.036376   42658 command_runner.go:130] > # read_only = false
	I0910 18:24:34.036382   42658 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0910 18:24:34.036388   42658 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0910 18:24:34.036392   42658 command_runner.go:130] > # live configuration reload.
	I0910 18:24:34.036401   42658 command_runner.go:130] > # log_level = "info"
	I0910 18:24:34.036409   42658 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0910 18:24:34.036414   42658 command_runner.go:130] > # This option supports live configuration reload.
	I0910 18:24:34.036418   42658 command_runner.go:130] > # log_filter = ""
	I0910 18:24:34.036423   42658 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0910 18:24:34.036434   42658 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0910 18:24:34.036440   42658 command_runner.go:130] > # separated by comma.
	I0910 18:24:34.036447   42658 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0910 18:24:34.036454   42658 command_runner.go:130] > # uid_mappings = ""
	I0910 18:24:34.036459   42658 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0910 18:24:34.036467   42658 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0910 18:24:34.036471   42658 command_runner.go:130] > # separated by comma.
	I0910 18:24:34.036480   42658 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0910 18:24:34.036484   42658 command_runner.go:130] > # gid_mappings = ""
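(Illustrative aside, not part of the logged config: both mappings use the containerID:hostID:size form described above. A hypothetical sketch mapping container root to an unprivileged host range could look like the following; the actual ranges depend on the host's /etc/subuid and /etc/subgid.)

	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"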
	I0910 18:24:34.036490   42658 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0910 18:24:34.036498   42658 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0910 18:24:34.036504   42658 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0910 18:24:34.036513   42658 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0910 18:24:34.036517   42658 command_runner.go:130] > # minimum_mappable_uid = -1
	I0910 18:24:34.036523   42658 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0910 18:24:34.036530   42658 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0910 18:24:34.036536   42658 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0910 18:24:34.036545   42658 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0910 18:24:34.036552   42658 command_runner.go:130] > # minimum_mappable_gid = -1
	I0910 18:24:34.036561   42658 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0910 18:24:34.036567   42658 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0910 18:24:34.036574   42658 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0910 18:24:34.036578   42658 command_runner.go:130] > # ctr_stop_timeout = 30
	I0910 18:24:34.036585   42658 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0910 18:24:34.036591   42658 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0910 18:24:34.036596   42658 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0910 18:24:34.036603   42658 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0910 18:24:34.036607   42658 command_runner.go:130] > drop_infra_ctr = false
	I0910 18:24:34.036612   42658 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0910 18:24:34.036620   42658 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0910 18:24:34.036627   42658 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0910 18:24:34.036637   42658 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0910 18:24:34.036645   42658 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0910 18:24:34.036653   42658 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0910 18:24:34.036658   42658 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0910 18:24:34.036665   42658 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0910 18:24:34.036668   42658 command_runner.go:130] > # shared_cpuset = ""
	I0910 18:24:34.036674   42658 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0910 18:24:34.036680   42658 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0910 18:24:34.036685   42658 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0910 18:24:34.036691   42658 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0910 18:24:34.036698   42658 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0910 18:24:34.036704   42658 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0910 18:24:34.036712   42658 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0910 18:24:34.036716   42658 command_runner.go:130] > # enable_criu_support = false
	I0910 18:24:34.036720   42658 command_runner.go:130] > # Enable/disable the generation of the container,
	I0910 18:24:34.036733   42658 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0910 18:24:34.036739   42658 command_runner.go:130] > # enable_pod_events = false
	I0910 18:24:34.036750   42658 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0910 18:24:34.036763   42658 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0910 18:24:34.036770   42658 command_runner.go:130] > # default_runtime = "runc"
	I0910 18:24:34.036775   42658 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0910 18:24:34.036784   42658 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0910 18:24:34.036794   42658 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0910 18:24:34.036803   42658 command_runner.go:130] > # creation as a file is not desired either.
	I0910 18:24:34.036813   42658 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0910 18:24:34.036820   42658 command_runner.go:130] > # the hostname is being managed dynamically.
	I0910 18:24:34.036824   42658 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0910 18:24:34.036830   42658 command_runner.go:130] > # ]
	I0910 18:24:34.036836   42658 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0910 18:24:34.036844   42658 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0910 18:24:34.036850   42658 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0910 18:24:34.036859   42658 command_runner.go:130] > # Each entry in the table should follow the format:
	I0910 18:24:34.036863   42658 command_runner.go:130] > #
	I0910 18:24:34.036870   42658 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0910 18:24:34.036880   42658 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0910 18:24:34.036940   42658 command_runner.go:130] > # runtime_type = "oci"
	I0910 18:24:34.036950   42658 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0910 18:24:34.036955   42658 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0910 18:24:34.036959   42658 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0910 18:24:34.036963   42658 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0910 18:24:34.036967   42658 command_runner.go:130] > # monitor_env = []
	I0910 18:24:34.036971   42658 command_runner.go:130] > # privileged_without_host_devices = false
	I0910 18:24:34.036978   42658 command_runner.go:130] > # allowed_annotations = []
	I0910 18:24:34.036983   42658 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0910 18:24:34.036989   42658 command_runner.go:130] > # Where:
	I0910 18:24:34.036994   42658 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0910 18:24:34.037002   42658 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0910 18:24:34.037008   42658 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0910 18:24:34.037016   42658 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0910 18:24:34.037020   42658 command_runner.go:130] > #   in $PATH.
	I0910 18:24:34.037026   42658 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0910 18:24:34.037033   42658 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0910 18:24:34.037039   42658 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0910 18:24:34.037044   42658 command_runner.go:130] > #   state.
	I0910 18:24:34.037050   42658 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0910 18:24:34.037057   42658 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0910 18:24:34.037063   42658 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0910 18:24:34.037068   42658 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0910 18:24:34.037086   42658 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0910 18:24:34.037100   42658 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0910 18:24:34.037112   42658 command_runner.go:130] > #   The currently recognized values are:
	I0910 18:24:34.037121   42658 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0910 18:24:34.037128   42658 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0910 18:24:34.037136   42658 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0910 18:24:34.037142   42658 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0910 18:24:34.037151   42658 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0910 18:24:34.037157   42658 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0910 18:24:34.037165   42658 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0910 18:24:34.037171   42658 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0910 18:24:34.037179   42658 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0910 18:24:34.037185   42658 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0910 18:24:34.037197   42658 command_runner.go:130] > #   deprecated option "conmon".
	I0910 18:24:34.037206   42658 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0910 18:24:34.037211   42658 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0910 18:24:34.037221   42658 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0910 18:24:34.037228   42658 command_runner.go:130] > #   should be moved to the container's cgroup
	I0910 18:24:34.037234   42658 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0910 18:24:34.037242   42658 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0910 18:24:34.037248   42658 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0910 18:24:34.037257   42658 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0910 18:24:34.037262   42658 command_runner.go:130] > #
	I0910 18:24:34.037269   42658 command_runner.go:130] > # Using the seccomp notifier feature:
	I0910 18:24:34.037276   42658 command_runner.go:130] > #
	I0910 18:24:34.037286   42658 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0910 18:24:34.037297   42658 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0910 18:24:34.037309   42658 command_runner.go:130] > #
	I0910 18:24:34.037318   42658 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0910 18:24:34.037330   42658 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0910 18:24:34.037337   42658 command_runner.go:130] > #
	I0910 18:24:34.037346   42658 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0910 18:24:34.037355   42658 command_runner.go:130] > # feature.
	I0910 18:24:34.037361   42658 command_runner.go:130] > #
	I0910 18:24:34.037372   42658 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0910 18:24:34.037382   42658 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0910 18:24:34.037392   42658 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0910 18:24:34.037406   42658 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0910 18:24:34.037418   42658 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0910 18:24:34.037425   42658 command_runner.go:130] > #
	I0910 18:24:34.037435   42658 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0910 18:24:34.037447   42658 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0910 18:24:34.037452   42658 command_runner.go:130] > #
	I0910 18:24:34.037463   42658 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0910 18:24:34.037474   42658 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0910 18:24:34.037481   42658 command_runner.go:130] > #
	I0910 18:24:34.037491   42658 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0910 18:24:34.037503   42658 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0910 18:24:34.037511   42658 command_runner.go:130] > # limitation.
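(Illustrative aside, not part of the logged config: to use the seccomp notifier described above, a runtime handler has to allow the annotation. A minimal sketch, with a hypothetical handler name "runc-notify":)

	[crio.runtime.runtimes.runc-notify]
	runtime_path = "/usr/bin/runc"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

(A pod selecting this handler would then set the annotation io.kubernetes.cri-o.seccompNotifierAction=stop and restartPolicy: Never, as the comments above require.)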
	I0910 18:24:34.037526   42658 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0910 18:24:34.037536   42658 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0910 18:24:34.037542   42658 command_runner.go:130] > runtime_type = "oci"
	I0910 18:24:34.037549   42658 command_runner.go:130] > runtime_root = "/run/runc"
	I0910 18:24:34.037558   42658 command_runner.go:130] > runtime_config_path = ""
	I0910 18:24:34.037565   42658 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0910 18:24:34.037574   42658 command_runner.go:130] > monitor_cgroup = "pod"
	I0910 18:24:34.037581   42658 command_runner.go:130] > monitor_exec_cgroup = ""
	I0910 18:24:34.037590   42658 command_runner.go:130] > monitor_env = [
	I0910 18:24:34.037598   42658 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0910 18:24:34.037605   42658 command_runner.go:130] > ]
	I0910 18:24:34.037614   42658 command_runner.go:130] > privileged_without_host_devices = false
	I0910 18:24:34.037627   42658 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0910 18:24:34.037637   42658 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0910 18:24:34.037652   42658 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0910 18:24:34.037666   42658 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0910 18:24:34.037679   42658 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0910 18:24:34.037691   42658 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0910 18:24:34.037708   42658 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0910 18:24:34.037722   42658 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0910 18:24:34.037730   42658 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0910 18:24:34.037741   42658 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0910 18:24:34.037746   42658 command_runner.go:130] > # Example:
	I0910 18:24:34.037753   42658 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0910 18:24:34.037760   42658 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0910 18:24:34.037770   42658 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0910 18:24:34.037778   42658 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0910 18:24:34.037782   42658 command_runner.go:130] > # cpuset = "0-1"
	I0910 18:24:34.037788   42658 command_runner.go:130] > # cpushares = 0
	I0910 18:24:34.037793   42658 command_runner.go:130] > # Where:
	I0910 18:24:34.037800   42658 command_runner.go:130] > # The workload name is workload-type.
	I0910 18:24:34.037810   42658 command_runner.go:130] > # To opt into this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0910 18:24:34.037819   42658 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0910 18:24:34.037827   42658 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0910 18:24:34.037839   42658 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0910 18:24:34.037847   42658 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0910 18:24:34.037860   42658 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0910 18:24:34.037870   42658 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0910 18:24:34.037877   42658 command_runner.go:130] > # Default value is set to true
	I0910 18:24:34.037883   42658 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0910 18:24:34.037890   42658 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0910 18:24:34.037897   42658 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0910 18:24:34.037904   42658 command_runner.go:130] > # Default value is set to 'false'
	I0910 18:24:34.037910   42658 command_runner.go:130] > # disable_hostport_mapping = false
	I0910 18:24:34.037919   42658 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0910 18:24:34.037927   42658 command_runner.go:130] > #
	I0910 18:24:34.037936   42658 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0910 18:24:34.037947   42658 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0910 18:24:34.037960   42658 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0910 18:24:34.037973   42658 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0910 18:24:34.037984   42658 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0910 18:24:34.037992   42658 command_runner.go:130] > [crio.image]
	I0910 18:24:34.038001   42658 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0910 18:24:34.038011   42658 command_runner.go:130] > # default_transport = "docker://"
	I0910 18:24:34.038020   42658 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0910 18:24:34.038032   42658 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0910 18:24:34.038042   42658 command_runner.go:130] > # global_auth_file = ""
	I0910 18:24:34.038050   42658 command_runner.go:130] > # The image used to instantiate infra containers.
	I0910 18:24:34.038060   42658 command_runner.go:130] > # This option supports live configuration reload.
	I0910 18:24:34.038067   42658 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0910 18:24:34.038080   42658 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0910 18:24:34.038091   42658 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0910 18:24:34.038098   42658 command_runner.go:130] > # This option supports live configuration reload.
	I0910 18:24:34.038111   42658 command_runner.go:130] > # pause_image_auth_file = ""
	I0910 18:24:34.038123   42658 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0910 18:24:34.038135   42658 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0910 18:24:34.038147   42658 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0910 18:24:34.038156   42658 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0910 18:24:34.038165   42658 command_runner.go:130] > # pause_command = "/pause"
	I0910 18:24:34.038176   42658 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0910 18:24:34.038188   42658 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0910 18:24:34.038197   42658 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0910 18:24:34.038217   42658 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0910 18:24:34.038229   42658 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0910 18:24:34.038240   42658 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0910 18:24:34.038248   42658 command_runner.go:130] > # pinned_images = [
	I0910 18:24:34.038252   42658 command_runner.go:130] > # ]
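(Illustrative aside, not part of the logged config: the three pattern styles mentioned above could look as follows; every entry except the pause image is a made-up placeholder.)

	pinned_images = [
		"registry.k8s.io/pause:3.10",   # exact match
		"quay.io/example/agent*",       # glob: trailing wildcard
		"*node-critical*",              # keyword: wildcards on both ends
	]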
	I0910 18:24:34.038257   42658 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0910 18:24:34.038263   42658 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0910 18:24:34.038270   42658 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0910 18:24:34.038275   42658 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0910 18:24:34.038281   42658 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0910 18:24:34.038285   42658 command_runner.go:130] > # signature_policy = ""
	I0910 18:24:34.038292   42658 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0910 18:24:34.038303   42658 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0910 18:24:34.038311   42658 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0910 18:24:34.038317   42658 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0910 18:24:34.038324   42658 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0910 18:24:34.038329   42658 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0910 18:24:34.038337   42658 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0910 18:24:34.038343   42658 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0910 18:24:34.038349   42658 command_runner.go:130] > # changing them here.
	I0910 18:24:34.038353   42658 command_runner.go:130] > # insecure_registries = [
	I0910 18:24:34.038356   42658 command_runner.go:130] > # ]
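(Illustrative aside, not part of the logged config: an entry here is a plain host[:port]. A hypothetical sketch for a local HTTP-only registry, though the comment above recommends configuring registries in /etc/containers/registries.conf instead:)

	insecure_registries = [
		"192.168.39.1:5000",
	]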
	I0910 18:24:34.038362   42658 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0910 18:24:34.038369   42658 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0910 18:24:34.038373   42658 command_runner.go:130] > # image_volumes = "mkdir"
	I0910 18:24:34.038379   42658 command_runner.go:130] > # Temporary directory to use for storing big files
	I0910 18:24:34.038385   42658 command_runner.go:130] > # big_files_temporary_dir = ""
	I0910 18:24:34.038393   42658 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0910 18:24:34.038399   42658 command_runner.go:130] > # CNI plugins.
	I0910 18:24:34.038402   42658 command_runner.go:130] > [crio.network]
	I0910 18:24:34.038408   42658 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0910 18:24:34.038415   42658 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0910 18:24:34.038419   42658 command_runner.go:130] > # cni_default_network = ""
	I0910 18:24:34.038425   42658 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0910 18:24:34.038430   42658 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0910 18:24:34.038437   42658 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0910 18:24:34.038451   42658 command_runner.go:130] > # plugin_dirs = [
	I0910 18:24:34.038457   42658 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0910 18:24:34.038460   42658 command_runner.go:130] > # ]
	I0910 18:24:34.038466   42658 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0910 18:24:34.038471   42658 command_runner.go:130] > [crio.metrics]
	I0910 18:24:34.038475   42658 command_runner.go:130] > # Globally enable or disable metrics support.
	I0910 18:24:34.038479   42658 command_runner.go:130] > enable_metrics = true
	I0910 18:24:34.038486   42658 command_runner.go:130] > # Specify enabled metrics collectors.
	I0910 18:24:34.038491   42658 command_runner.go:130] > # Per default all metrics are enabled.
	I0910 18:24:34.038500   42658 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0910 18:24:34.038506   42658 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0910 18:24:34.038513   42658 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0910 18:24:34.038517   42658 command_runner.go:130] > # metrics_collectors = [
	I0910 18:24:34.038523   42658 command_runner.go:130] > # 	"operations",
	I0910 18:24:34.038528   42658 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0910 18:24:34.038532   42658 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0910 18:24:34.038536   42658 command_runner.go:130] > # 	"operations_errors",
	I0910 18:24:34.038540   42658 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0910 18:24:34.038546   42658 command_runner.go:130] > # 	"image_pulls_by_name",
	I0910 18:24:34.038551   42658 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0910 18:24:34.038558   42658 command_runner.go:130] > # 	"image_pulls_failures",
	I0910 18:24:34.038562   42658 command_runner.go:130] > # 	"image_pulls_successes",
	I0910 18:24:34.038568   42658 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0910 18:24:34.038572   42658 command_runner.go:130] > # 	"image_layer_reuse",
	I0910 18:24:34.038576   42658 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0910 18:24:34.038580   42658 command_runner.go:130] > # 	"containers_oom_total",
	I0910 18:24:34.038584   42658 command_runner.go:130] > # 	"containers_oom",
	I0910 18:24:34.038588   42658 command_runner.go:130] > # 	"processes_defunct",
	I0910 18:24:34.038592   42658 command_runner.go:130] > # 	"operations_total",
	I0910 18:24:34.038596   42658 command_runner.go:130] > # 	"operations_latency_seconds",
	I0910 18:24:34.038603   42658 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0910 18:24:34.038607   42658 command_runner.go:130] > # 	"operations_errors_total",
	I0910 18:24:34.038614   42658 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0910 18:24:34.038618   42658 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0910 18:24:34.038622   42658 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0910 18:24:34.038626   42658 command_runner.go:130] > # 	"image_pulls_success_total",
	I0910 18:24:34.038636   42658 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0910 18:24:34.038643   42658 command_runner.go:130] > # 	"containers_oom_count_total",
	I0910 18:24:34.038647   42658 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0910 18:24:34.038653   42658 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0910 18:24:34.038657   42658 command_runner.go:130] > # ]
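(Illustrative aside, not part of the logged config: a minimal sketch enabling only image-pull and OOM collectors; per the comment above, the "crio_"- and "container_runtime_"-prefixed names refer to the same collectors.)

	[crio.metrics]
	enable_metrics = true
	metrics_collectors = [
		"image_pulls_success_total",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]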
	I0910 18:24:34.038662   42658 command_runner.go:130] > # The port on which the metrics server will listen.
	I0910 18:24:34.038666   42658 command_runner.go:130] > # metrics_port = 9090
	I0910 18:24:34.038670   42658 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0910 18:24:34.038675   42658 command_runner.go:130] > # metrics_socket = ""
	I0910 18:24:34.038681   42658 command_runner.go:130] > # The certificate for the secure metrics server.
	I0910 18:24:34.038687   42658 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0910 18:24:34.038695   42658 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0910 18:24:34.038699   42658 command_runner.go:130] > # certificate on any modification event.
	I0910 18:24:34.038708   42658 command_runner.go:130] > # metrics_cert = ""
	I0910 18:24:34.038713   42658 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0910 18:24:34.038725   42658 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0910 18:24:34.038731   42658 command_runner.go:130] > # metrics_key = ""
	I0910 18:24:34.038741   42658 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0910 18:24:34.038747   42658 command_runner.go:130] > [crio.tracing]
	I0910 18:24:34.038752   42658 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0910 18:24:34.038759   42658 command_runner.go:130] > # enable_tracing = false
	I0910 18:24:34.038763   42658 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0910 18:24:34.038767   42658 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0910 18:24:34.038776   42658 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0910 18:24:34.038780   42658 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0910 18:24:34.038784   42658 command_runner.go:130] > # CRI-O NRI configuration.
	I0910 18:24:34.038789   42658 command_runner.go:130] > [crio.nri]
	I0910 18:24:34.038794   42658 command_runner.go:130] > # Globally enable or disable NRI.
	I0910 18:24:34.038798   42658 command_runner.go:130] > # enable_nri = false
	I0910 18:24:34.038802   42658 command_runner.go:130] > # NRI socket to listen on.
	I0910 18:24:34.038808   42658 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0910 18:24:34.038813   42658 command_runner.go:130] > # NRI plugin directory to use.
	I0910 18:24:34.038820   42658 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0910 18:24:34.038825   42658 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0910 18:24:34.038835   42658 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0910 18:24:34.038842   42658 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0910 18:24:34.038851   42658 command_runner.go:130] > # nri_disable_connections = false
	I0910 18:24:34.038861   42658 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0910 18:24:34.038869   42658 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0910 18:24:34.038876   42658 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0910 18:24:34.038886   42658 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0910 18:24:34.038895   42658 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0910 18:24:34.038903   42658 command_runner.go:130] > [crio.stats]
	I0910 18:24:34.038915   42658 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0910 18:24:34.038925   42658 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0910 18:24:34.038934   42658 command_runner.go:130] > # stats_collection_period = 0
	I0910 18:24:34.039136   42658 cni.go:84] Creating CNI manager for ""
	I0910 18:24:34.039153   42658 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0910 18:24:34.039172   42658 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:24:34.039193   42658 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.248 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-925076 NodeName:multinode-925076 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:24:34.039343   42658 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-925076"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.248
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.248"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
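	(Editor's note, not part of the captured log: the generated ClusterConfiguration above sets podSubnet 10.244.0.0/16 and passes allocate-node-cidrs: "true" to the controller manager, which then hands each node its own slice of that range. A minimal Go sketch of that carving, assuming the controller manager's default /24 per-node mask — the mask size is an assumption, not something recorded in this log:)

	```go
	// Sketch only: splits a 10.244.0.0/16 cluster CIDR into per-node /24 pod CIDRs,
	// the way allocate-node-cidrs typically behaves with default settings.
	package main

	import (
		"fmt"
		"net/netip"
	)

	func main() {
		clusterCIDR := netip.MustParsePrefix("10.244.0.0/16") // from the kubeadm config above
		base := clusterCIDR.Addr().As4()
		// Each node receives the next /24: 10.244.0.0/24, 10.244.1.0/24, 10.244.2.0/24, ...
		for node := 0; node < 3; node++ { // three nodes, as detected by the CNI manager above
			sub := netip.AddrFrom4([4]byte{base[0], base[1], byte(node), 0})
			fmt.Printf("node %d pod CIDR: %s\n", node, netip.PrefixFrom(sub, 24))
		}
	}
	```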
	I0910 18:24:34.039402   42658 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:24:34.050273   42658 command_runner.go:130] > kubeadm
	I0910 18:24:34.050294   42658 command_runner.go:130] > kubectl
	I0910 18:24:34.050298   42658 command_runner.go:130] > kubelet
	I0910 18:24:34.050321   42658 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:24:34.050401   42658 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:24:34.060802   42658 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0910 18:24:34.077840   42658 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:24:34.094446   42658 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0910 18:24:34.110951   42658 ssh_runner.go:195] Run: grep 192.168.39.248	control-plane.minikube.internal$ /etc/hosts
	I0910 18:24:34.115291   42658 command_runner.go:130] > 192.168.39.248	control-plane.minikube.internal
	I0910 18:24:34.115371   42658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:24:34.253785   42658 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:24:34.268947   42658 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076 for IP: 192.168.39.248
	I0910 18:24:34.268980   42658 certs.go:194] generating shared ca certs ...
	I0910 18:24:34.269000   42658 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:24:34.269203   42658 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:24:34.269246   42658 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:24:34.269256   42658 certs.go:256] generating profile certs ...
	I0910 18:24:34.269343   42658 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/client.key
	I0910 18:24:34.269392   42658 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/apiserver.key.b9c1a60e
	I0910 18:24:34.269440   42658 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/proxy-client.key
	I0910 18:24:34.269451   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 18:24:34.269462   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0910 18:24:34.269472   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 18:24:34.269490   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 18:24:34.269502   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 18:24:34.269513   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 18:24:34.269525   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 18:24:34.269536   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 18:24:34.269591   42658 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:24:34.269617   42658 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:24:34.269626   42658 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:24:34.269648   42658 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:24:34.269669   42658 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:24:34.269690   42658 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:24:34.269726   42658 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:24:34.269750   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /usr/share/ca-certificates/131212.pem
	I0910 18:24:34.269762   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:24:34.269774   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem -> /usr/share/ca-certificates/13121.pem
	I0910 18:24:34.271237   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:24:34.295596   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:24:34.318217   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:24:34.341332   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:24:34.364832   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0910 18:24:34.388027   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 18:24:34.411338   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:24:34.434423   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:24:34.457823   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:24:34.480374   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:24:34.503236   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:24:34.525380   42658 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:24:34.541968   42658 ssh_runner.go:195] Run: openssl version
	I0910 18:24:34.548233   42658 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0910 18:24:34.548316   42658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:24:34.559413   42658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:24:34.563715   42658 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:24:34.563934   42658 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:24:34.563983   42658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:24:34.569348   42658 command_runner.go:130] > b5213941
	I0910 18:24:34.569403   42658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:24:34.578712   42658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:24:34.589392   42658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:24:34.593690   42658 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:24:34.593758   42658 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:24:34.593807   42658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:24:34.599246   42658 command_runner.go:130] > 51391683
	I0910 18:24:34.599303   42658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:24:34.608977   42658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:24:34.619679   42658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:24:34.623904   42658 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:24:34.623968   42658 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:24:34.624013   42658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:24:34.629451   42658 command_runner.go:130] > 3ec20f2e
	I0910 18:24:34.629515   42658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:24:34.638807   42658 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:24:34.643472   42658 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:24:34.643489   42658 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0910 18:24:34.643495   42658 command_runner.go:130] > Device: 253,1	Inode: 532758      Links: 1
	I0910 18:24:34.643503   42658 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0910 18:24:34.643519   42658 command_runner.go:130] > Access: 2024-09-10 18:17:54.803795567 +0000
	I0910 18:24:34.643530   42658 command_runner.go:130] > Modify: 2024-09-10 18:17:54.803795567 +0000
	I0910 18:24:34.643538   42658 command_runner.go:130] > Change: 2024-09-10 18:17:54.803795567 +0000
	I0910 18:24:34.643544   42658 command_runner.go:130] >  Birth: 2024-09-10 18:17:54.803795567 +0000
	I0910 18:24:34.643648   42658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:24:34.649056   42658 command_runner.go:130] > Certificate will not expire
	I0910 18:24:34.649123   42658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:24:34.654495   42658 command_runner.go:130] > Certificate will not expire
	I0910 18:24:34.654543   42658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:24:34.659805   42658 command_runner.go:130] > Certificate will not expire
	I0910 18:24:34.659850   42658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:24:34.665025   42658 command_runner.go:130] > Certificate will not expire
	I0910 18:24:34.665267   42658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:24:34.670595   42658 command_runner.go:130] > Certificate will not expire
	I0910 18:24:34.670646   42658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 18:24:34.676386   42658 command_runner.go:130] > Certificate will not expire
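	(Editor's note, not part of the captured log: each `openssl x509 -checkend 86400` call above simply asks whether the certificate expires within the next 24 hours, which is why every check reports "Certificate will not expire". A minimal Go sketch of the same check; the file path is taken from the log, but this snippet is illustrative rather than minikube's actual implementation:)

	```go
	// Sketch only: Go equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// -checkend 86400: does the certificate expire within the next 86400 seconds?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}
	```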
	I0910 18:24:34.676459   42658 kubeadm.go:392] StartCluster: {Name:multinode-925076 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-925076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.164 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:24:34.676572   42658 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:24:34.676619   42658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:24:34.711462   42658 command_runner.go:130] > 7e28c3bf386c7f6b3456cbd5ffbc27426deb230f9c35ed5d09da9958cef0687b
	I0910 18:24:34.711484   42658 command_runner.go:130] > 267ae04b613e33275055e3ec9dc1ddc72631b493da0a0d7269a9615aac2f9733
	I0910 18:24:34.711493   42658 command_runner.go:130] > b4b03eebef957440150db22a8e23ac078472e28e2b05760b02db504a181aa0c8
	I0910 18:24:34.711503   42658 command_runner.go:130] > 4648cdf59f3f30d9cbbe5265394c226367f048450e2b327c1a7b81cc816a995b
	I0910 18:24:34.711512   42658 command_runner.go:130] > 248fbf0cae5341cdebf6cfbd758a48b77e48379df069ffdfff8aa8149f264113
	I0910 18:24:34.711522   42658 command_runner.go:130] > 5e4c3672b3e4dee78dd9e5a9b65197e3873c4a9c53965dcb7400c183ab876a1b
	I0910 18:24:34.711533   42658 command_runner.go:130] > 48859d1709a7bc6847927b022780370cde454c0419e16262db7bbaf31a878cd3
	I0910 18:24:34.711546   42658 command_runner.go:130] > e6c580dc81be234ffff0295376e4516b395bbdda306d809df83963ee02f0b246
	I0910 18:24:34.711573   42658 cri.go:89] found id: "7e28c3bf386c7f6b3456cbd5ffbc27426deb230f9c35ed5d09da9958cef0687b"
	I0910 18:24:34.711585   42658 cri.go:89] found id: "267ae04b613e33275055e3ec9dc1ddc72631b493da0a0d7269a9615aac2f9733"
	I0910 18:24:34.711590   42658 cri.go:89] found id: "b4b03eebef957440150db22a8e23ac078472e28e2b05760b02db504a181aa0c8"
	I0910 18:24:34.711598   42658 cri.go:89] found id: "4648cdf59f3f30d9cbbe5265394c226367f048450e2b327c1a7b81cc816a995b"
	I0910 18:24:34.711603   42658 cri.go:89] found id: "248fbf0cae5341cdebf6cfbd758a48b77e48379df069ffdfff8aa8149f264113"
	I0910 18:24:34.711610   42658 cri.go:89] found id: "5e4c3672b3e4dee78dd9e5a9b65197e3873c4a9c53965dcb7400c183ab876a1b"
	I0910 18:24:34.711614   42658 cri.go:89] found id: "48859d1709a7bc6847927b022780370cde454c0419e16262db7bbaf31a878cd3"
	I0910 18:24:34.711617   42658 cri.go:89] found id: "e6c580dc81be234ffff0295376e4516b395bbdda306d809df83963ee02f0b246"
	I0910 18:24:34.711619   42658 cri.go:89] found id: ""
	I0910 18:24:34.711656   42658 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.135586260Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992780135556908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42b87ca4-37a9-46b4-9826-03bf82ff5918 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.136394104Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91737040-1689-4a33-b6fe-a618e676bc84 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.136448012Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91737040-1689-4a33-b6fe-a618e676bc84 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.136765526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a12f6a1f0c5a4e81403bc41c67a11ab96b43778e7184080cf02e7ba163e063c,PodSandboxId:e074512c790dca6c96654d28cb0bbfd406f66fe8c7216e203d238708c7306a50,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725992716258951136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gbtc6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 601a1920-7ada-4e22-bc76-a7168d0a0bf4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3aea89b49de5b59b0c2167610fb0c3f974618c21095f7fd04bce8f13e28b14,PodSandboxId:c860108c71b51ef2f506e83497b469d883cb52e402792cc24ae609305de0d131,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725992682870687072,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4dglr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babfa2bd-c4b5-4374-a486-8336d9be50b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f7c6eb1d280fdcc0f773caf8ba34f30726aebc808785c927ea8aeeaf3882c19,PodSandboxId:9621e71c7ef0b52b444749f3f91d6f4dc685162fe532473697646e84a303e8ed,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725992682709148360,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2n7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19158312-8a4b-4f63-8745-22339
d7878c2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b457507d0da41ddc5a8e0de89599c9d69bb5914f1d111fafaee11725308027,PodSandboxId:5d9c945f08338d367db919f2c522efa4d25542fb2dabc9e2fee91d73b682f230,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725992682620955813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66f13cfc-718f-4b14-bf4b-d2ee27044660,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bfc244fa8856bcc84943c722d7e61d6e8dcc5ddd69ad8c17c5c8ae5ded80ab,PodSandboxId:7682c5501f3c75cdb823447d1c7796fd87a095d7505731e37bffbaef16dcea90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725992682529540534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j26sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82133fae-7613-40c1-bf5e-1442806d5b4c,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5c9818ec2122dec6bcf3bd808ac03ad3ecf058f5377b5547d1fafc2773dd44,PodSandboxId:5031d687fbe639f03d115771c219a53896ffd9d6c0d7c484dd5dfdf69fdc20a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725992677201116952,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe5727e8261b45d281a09ec0b883902d,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e57778740f1076867b7107ddec5bd8359baca4f6dc51d6e47a690a3a3263d7f,PodSandboxId:824c9578b1825f70f636251cb32bcad2f3492251600131d8443ab25e42acb3f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725992677170605009,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b16718a9933e65146d574eda52f42b,},Annotations:map[string]string{io.kube
rnetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51637becea86dae763bacad207a60122b90103f03c5723dd3a03fafe64303651,PodSandboxId:f20d6e21051392dcbef0f036469b4781da05bdbd7682dc10c9788386b99349ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725992677103413113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0276b6da07265e518018db7a0ad97828,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c33747b9a7e31c35e1d58b859edfb6f0a2b19386be2ac4476640baaf59ca5e9,PodSandboxId:20a90a3fa8dfed8490e7f98060a0224e87b9a24a5a685b84f32dbd05d4bfea61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725992677040727967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a20b7386bbb0fcd5c5efd9a16a10f1a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5322d1fb1585e4fa1c8971d009fbd9e36a75e5d96c52a077ed858c5aba3f6,PodSandboxId:be7348f29e3a9f25bab84aea9e87da0b7a7c29397c3a066b0779fc5dd28e8d03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725992357209488442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gbtc6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 601a1920-7ada-4e22-bc76-a7168d0a0bf4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e28c3bf386c7f6b3456cbd5ffbc27426deb230f9c35ed5d09da9958cef0687b,PodSandboxId:b6c9ffea7d3910bb271189f7e25fbf13940ba77f177ed56a23a34e71a450243d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725992303859804144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4dglr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babfa2bd-c4b5-4374-a486-8336d9be50b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267ae04b613e33275055e3ec9dc1ddc72631b493da0a0d7269a9615aac2f9733,PodSandboxId:26fe6d01b14dfd0fd19b712cb4a66352d5c855cfa59ab7cd1d8be99bf578121b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725992302943489138,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 66f13cfc-718f-4b14-bf4b-d2ee27044660,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4b03eebef957440150db22a8e23ac078472e28e2b05760b02db504a181aa0c8,PodSandboxId:410f02f2b92394d7dda8bb2e446e84352b4226819e677b19e6419d2987de3280,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725992291394474064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2n7r,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 19158312-8a4b-4f63-8745-22339d7878c2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4648cdf59f3f30d9cbbe5265394c226367f048450e2b327c1a7b81cc816a995b,PodSandboxId:89879f1f08d805e5cc92a9c935f524912704ed7efb6df9e3882fe4a02bbccc45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725992289112428695,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j26sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 82133fae-7613-40c1-bf5e-1442806d5b4c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248fbf0cae5341cdebf6cfbd758a48b77e48379df069ffdfff8aa8149f264113,PodSandboxId:fe4bd7a4f5685d3dbd61f44efa67ded6128a43cae3cf82a63309282654e49824,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725992278504516985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a
20b7386bbb0fcd5c5efd9a16a10f1a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48859d1709a7bc6847927b022780370cde454c0419e16262db7bbaf31a878cd3,PodSandboxId:ae804a52d8bc7ec4957e30a8ec60cea6420472750c0be91a027e84f72b04cfc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725992278440959946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b16718a9933e651
46d574eda52f42b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4c3672b3e4dee78dd9e5a9b65197e3873c4a9c53965dcb7400c183ab876a1b,PodSandboxId:9321f7e4ce4d46fc1090899c779bcb495279a02d61a1bd45b2a4f7e9a62ff419,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725992278475986554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0276b6da07265e518018db7a0ad97828,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c580dc81be234ffff0295376e4516b395bbdda306d809df83963ee02f0b246,PodSandboxId:2ff3feb0b23e17fe12c300ef2d520bd085ac4ad913eadaaed21e8ea83c345735,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725992278430880301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe5727e8261b45d281a09ec0b883902d,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91737040-1689-4a33-b6fe-a618e676bc84 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.187018374Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b308c61c-7890-4910-878c-5b76139bdc54 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.187176516Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b308c61c-7890-4910-878c-5b76139bdc54 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.189944882Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73667dc8-3045-4227-b93b-613d1336c3d5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.190389821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992780190362866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73667dc8-3045-4227-b93b-613d1336c3d5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.191113609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26ba03ff-6477-4e6e-8e71-5e80a7bca4db name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.191176196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26ba03ff-6477-4e6e-8e71-5e80a7bca4db name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.191527675Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a12f6a1f0c5a4e81403bc41c67a11ab96b43778e7184080cf02e7ba163e063c,PodSandboxId:e074512c790dca6c96654d28cb0bbfd406f66fe8c7216e203d238708c7306a50,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725992716258951136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gbtc6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 601a1920-7ada-4e22-bc76-a7168d0a0bf4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3aea89b49de5b59b0c2167610fb0c3f974618c21095f7fd04bce8f13e28b14,PodSandboxId:c860108c71b51ef2f506e83497b469d883cb52e402792cc24ae609305de0d131,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725992682870687072,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4dglr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babfa2bd-c4b5-4374-a486-8336d9be50b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f7c6eb1d280fdcc0f773caf8ba34f30726aebc808785c927ea8aeeaf3882c19,PodSandboxId:9621e71c7ef0b52b444749f3f91d6f4dc685162fe532473697646e84a303e8ed,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725992682709148360,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2n7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19158312-8a4b-4f63-8745-22339
d7878c2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b457507d0da41ddc5a8e0de89599c9d69bb5914f1d111fafaee11725308027,PodSandboxId:5d9c945f08338d367db919f2c522efa4d25542fb2dabc9e2fee91d73b682f230,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725992682620955813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66f13cfc-718f-4b14-bf4b-d2ee27044660,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bfc244fa8856bcc84943c722d7e61d6e8dcc5ddd69ad8c17c5c8ae5ded80ab,PodSandboxId:7682c5501f3c75cdb823447d1c7796fd87a095d7505731e37bffbaef16dcea90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725992682529540534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j26sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82133fae-7613-40c1-bf5e-1442806d5b4c,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5c9818ec2122dec6bcf3bd808ac03ad3ecf058f5377b5547d1fafc2773dd44,PodSandboxId:5031d687fbe639f03d115771c219a53896ffd9d6c0d7c484dd5dfdf69fdc20a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725992677201116952,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe5727e8261b45d281a09ec0b883902d,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e57778740f1076867b7107ddec5bd8359baca4f6dc51d6e47a690a3a3263d7f,PodSandboxId:824c9578b1825f70f636251cb32bcad2f3492251600131d8443ab25e42acb3f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725992677170605009,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b16718a9933e65146d574eda52f42b,},Annotations:map[string]string{io.kube
rnetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51637becea86dae763bacad207a60122b90103f03c5723dd3a03fafe64303651,PodSandboxId:f20d6e21051392dcbef0f036469b4781da05bdbd7682dc10c9788386b99349ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725992677103413113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0276b6da07265e518018db7a0ad97828,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c33747b9a7e31c35e1d58b859edfb6f0a2b19386be2ac4476640baaf59ca5e9,PodSandboxId:20a90a3fa8dfed8490e7f98060a0224e87b9a24a5a685b84f32dbd05d4bfea61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725992677040727967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a20b7386bbb0fcd5c5efd9a16a10f1a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5322d1fb1585e4fa1c8971d009fbd9e36a75e5d96c52a077ed858c5aba3f6,PodSandboxId:be7348f29e3a9f25bab84aea9e87da0b7a7c29397c3a066b0779fc5dd28e8d03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725992357209488442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gbtc6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 601a1920-7ada-4e22-bc76-a7168d0a0bf4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e28c3bf386c7f6b3456cbd5ffbc27426deb230f9c35ed5d09da9958cef0687b,PodSandboxId:b6c9ffea7d3910bb271189f7e25fbf13940ba77f177ed56a23a34e71a450243d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725992303859804144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4dglr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babfa2bd-c4b5-4374-a486-8336d9be50b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267ae04b613e33275055e3ec9dc1ddc72631b493da0a0d7269a9615aac2f9733,PodSandboxId:26fe6d01b14dfd0fd19b712cb4a66352d5c855cfa59ab7cd1d8be99bf578121b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725992302943489138,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 66f13cfc-718f-4b14-bf4b-d2ee27044660,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4b03eebef957440150db22a8e23ac078472e28e2b05760b02db504a181aa0c8,PodSandboxId:410f02f2b92394d7dda8bb2e446e84352b4226819e677b19e6419d2987de3280,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725992291394474064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2n7r,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 19158312-8a4b-4f63-8745-22339d7878c2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4648cdf59f3f30d9cbbe5265394c226367f048450e2b327c1a7b81cc816a995b,PodSandboxId:89879f1f08d805e5cc92a9c935f524912704ed7efb6df9e3882fe4a02bbccc45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725992289112428695,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j26sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 82133fae-7613-40c1-bf5e-1442806d5b4c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248fbf0cae5341cdebf6cfbd758a48b77e48379df069ffdfff8aa8149f264113,PodSandboxId:fe4bd7a4f5685d3dbd61f44efa67ded6128a43cae3cf82a63309282654e49824,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725992278504516985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a
20b7386bbb0fcd5c5efd9a16a10f1a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48859d1709a7bc6847927b022780370cde454c0419e16262db7bbaf31a878cd3,PodSandboxId:ae804a52d8bc7ec4957e30a8ec60cea6420472750c0be91a027e84f72b04cfc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725992278440959946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b16718a9933e651
46d574eda52f42b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4c3672b3e4dee78dd9e5a9b65197e3873c4a9c53965dcb7400c183ab876a1b,PodSandboxId:9321f7e4ce4d46fc1090899c779bcb495279a02d61a1bd45b2a4f7e9a62ff419,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725992278475986554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0276b6da07265e518018db7a0ad97828,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c580dc81be234ffff0295376e4516b395bbdda306d809df83963ee02f0b246,PodSandboxId:2ff3feb0b23e17fe12c300ef2d520bd085ac4ad913eadaaed21e8ea83c345735,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725992278430880301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe5727e8261b45d281a09ec0b883902d,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26ba03ff-6477-4e6e-8e71-5e80a7bca4db name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.233094168Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f70d17b-4f3d-47c0-8543-21df0e35b195 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.233170421Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f70d17b-4f3d-47c0-8543-21df0e35b195 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.234150205Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a8a70c9-a980-492e-8277-2ef006060e51 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.234546859Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992780234527068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a8a70c9-a980-492e-8277-2ef006060e51 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.235129627Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=371cf639-9183-4d3b-9525-999856a6cb03 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.235184167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=371cf639-9183-4d3b-9525-999856a6cb03 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.235542951Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a12f6a1f0c5a4e81403bc41c67a11ab96b43778e7184080cf02e7ba163e063c,PodSandboxId:e074512c790dca6c96654d28cb0bbfd406f66fe8c7216e203d238708c7306a50,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725992716258951136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gbtc6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 601a1920-7ada-4e22-bc76-a7168d0a0bf4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3aea89b49de5b59b0c2167610fb0c3f974618c21095f7fd04bce8f13e28b14,PodSandboxId:c860108c71b51ef2f506e83497b469d883cb52e402792cc24ae609305de0d131,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725992682870687072,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4dglr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babfa2bd-c4b5-4374-a486-8336d9be50b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f7c6eb1d280fdcc0f773caf8ba34f30726aebc808785c927ea8aeeaf3882c19,PodSandboxId:9621e71c7ef0b52b444749f3f91d6f4dc685162fe532473697646e84a303e8ed,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725992682709148360,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2n7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19158312-8a4b-4f63-8745-22339
d7878c2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b457507d0da41ddc5a8e0de89599c9d69bb5914f1d111fafaee11725308027,PodSandboxId:5d9c945f08338d367db919f2c522efa4d25542fb2dabc9e2fee91d73b682f230,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725992682620955813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66f13cfc-718f-4b14-bf4b-d2ee27044660,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bfc244fa8856bcc84943c722d7e61d6e8dcc5ddd69ad8c17c5c8ae5ded80ab,PodSandboxId:7682c5501f3c75cdb823447d1c7796fd87a095d7505731e37bffbaef16dcea90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725992682529540534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j26sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82133fae-7613-40c1-bf5e-1442806d5b4c,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5c9818ec2122dec6bcf3bd808ac03ad3ecf058f5377b5547d1fafc2773dd44,PodSandboxId:5031d687fbe639f03d115771c219a53896ffd9d6c0d7c484dd5dfdf69fdc20a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725992677201116952,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe5727e8261b45d281a09ec0b883902d,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e57778740f1076867b7107ddec5bd8359baca4f6dc51d6e47a690a3a3263d7f,PodSandboxId:824c9578b1825f70f636251cb32bcad2f3492251600131d8443ab25e42acb3f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725992677170605009,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b16718a9933e65146d574eda52f42b,},Annotations:map[string]string{io.kube
rnetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51637becea86dae763bacad207a60122b90103f03c5723dd3a03fafe64303651,PodSandboxId:f20d6e21051392dcbef0f036469b4781da05bdbd7682dc10c9788386b99349ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725992677103413113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0276b6da07265e518018db7a0ad97828,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c33747b9a7e31c35e1d58b859edfb6f0a2b19386be2ac4476640baaf59ca5e9,PodSandboxId:20a90a3fa8dfed8490e7f98060a0224e87b9a24a5a685b84f32dbd05d4bfea61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725992677040727967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a20b7386bbb0fcd5c5efd9a16a10f1a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5322d1fb1585e4fa1c8971d009fbd9e36a75e5d96c52a077ed858c5aba3f6,PodSandboxId:be7348f29e3a9f25bab84aea9e87da0b7a7c29397c3a066b0779fc5dd28e8d03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725992357209488442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gbtc6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 601a1920-7ada-4e22-bc76-a7168d0a0bf4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e28c3bf386c7f6b3456cbd5ffbc27426deb230f9c35ed5d09da9958cef0687b,PodSandboxId:b6c9ffea7d3910bb271189f7e25fbf13940ba77f177ed56a23a34e71a450243d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725992303859804144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4dglr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babfa2bd-c4b5-4374-a486-8336d9be50b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267ae04b613e33275055e3ec9dc1ddc72631b493da0a0d7269a9615aac2f9733,PodSandboxId:26fe6d01b14dfd0fd19b712cb4a66352d5c855cfa59ab7cd1d8be99bf578121b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725992302943489138,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 66f13cfc-718f-4b14-bf4b-d2ee27044660,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4b03eebef957440150db22a8e23ac078472e28e2b05760b02db504a181aa0c8,PodSandboxId:410f02f2b92394d7dda8bb2e446e84352b4226819e677b19e6419d2987de3280,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725992291394474064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2n7r,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 19158312-8a4b-4f63-8745-22339d7878c2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4648cdf59f3f30d9cbbe5265394c226367f048450e2b327c1a7b81cc816a995b,PodSandboxId:89879f1f08d805e5cc92a9c935f524912704ed7efb6df9e3882fe4a02bbccc45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725992289112428695,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j26sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 82133fae-7613-40c1-bf5e-1442806d5b4c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248fbf0cae5341cdebf6cfbd758a48b77e48379df069ffdfff8aa8149f264113,PodSandboxId:fe4bd7a4f5685d3dbd61f44efa67ded6128a43cae3cf82a63309282654e49824,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725992278504516985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a
20b7386bbb0fcd5c5efd9a16a10f1a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48859d1709a7bc6847927b022780370cde454c0419e16262db7bbaf31a878cd3,PodSandboxId:ae804a52d8bc7ec4957e30a8ec60cea6420472750c0be91a027e84f72b04cfc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725992278440959946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b16718a9933e651
46d574eda52f42b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4c3672b3e4dee78dd9e5a9b65197e3873c4a9c53965dcb7400c183ab876a1b,PodSandboxId:9321f7e4ce4d46fc1090899c779bcb495279a02d61a1bd45b2a4f7e9a62ff419,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725992278475986554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0276b6da07265e518018db7a0ad97828,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c580dc81be234ffff0295376e4516b395bbdda306d809df83963ee02f0b246,PodSandboxId:2ff3feb0b23e17fe12c300ef2d520bd085ac4ad913eadaaed21e8ea83c345735,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725992278430880301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe5727e8261b45d281a09ec0b883902d,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=371cf639-9183-4d3b-9525-999856a6cb03 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.277310985Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f457d508-a582-4abe-9035-7c6042b992fd name=/runtime.v1.RuntimeService/Version
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.277391031Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f457d508-a582-4abe-9035-7c6042b992fd name=/runtime.v1.RuntimeService/Version
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.278304058Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5b1eaf3-5f4f-49a3-8477-1c4010287176 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.278706435Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992780278686270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5b1eaf3-5f4f-49a3-8477-1c4010287176 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.279207232Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd962d08-7f47-4987-8f40-f4e4599e652e name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.279261840Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd962d08-7f47-4987-8f40-f4e4599e652e name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:26:20 multinode-925076 crio[2744]: time="2024-09-10 18:26:20.279591335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a12f6a1f0c5a4e81403bc41c67a11ab96b43778e7184080cf02e7ba163e063c,PodSandboxId:e074512c790dca6c96654d28cb0bbfd406f66fe8c7216e203d238708c7306a50,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725992716258951136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gbtc6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 601a1920-7ada-4e22-bc76-a7168d0a0bf4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3aea89b49de5b59b0c2167610fb0c3f974618c21095f7fd04bce8f13e28b14,PodSandboxId:c860108c71b51ef2f506e83497b469d883cb52e402792cc24ae609305de0d131,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725992682870687072,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4dglr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babfa2bd-c4b5-4374-a486-8336d9be50b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f7c6eb1d280fdcc0f773caf8ba34f30726aebc808785c927ea8aeeaf3882c19,PodSandboxId:9621e71c7ef0b52b444749f3f91d6f4dc685162fe532473697646e84a303e8ed,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725992682709148360,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2n7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19158312-8a4b-4f63-8745-22339
d7878c2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b457507d0da41ddc5a8e0de89599c9d69bb5914f1d111fafaee11725308027,PodSandboxId:5d9c945f08338d367db919f2c522efa4d25542fb2dabc9e2fee91d73b682f230,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725992682620955813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66f13cfc-718f-4b14-bf4b-d2ee27044660,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bfc244fa8856bcc84943c722d7e61d6e8dcc5ddd69ad8c17c5c8ae5ded80ab,PodSandboxId:7682c5501f3c75cdb823447d1c7796fd87a095d7505731e37bffbaef16dcea90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725992682529540534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j26sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82133fae-7613-40c1-bf5e-1442806d5b4c,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5c9818ec2122dec6bcf3bd808ac03ad3ecf058f5377b5547d1fafc2773dd44,PodSandboxId:5031d687fbe639f03d115771c219a53896ffd9d6c0d7c484dd5dfdf69fdc20a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725992677201116952,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe5727e8261b45d281a09ec0b883902d,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e57778740f1076867b7107ddec5bd8359baca4f6dc51d6e47a690a3a3263d7f,PodSandboxId:824c9578b1825f70f636251cb32bcad2f3492251600131d8443ab25e42acb3f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725992677170605009,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b16718a9933e65146d574eda52f42b,},Annotations:map[string]string{io.kube
rnetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51637becea86dae763bacad207a60122b90103f03c5723dd3a03fafe64303651,PodSandboxId:f20d6e21051392dcbef0f036469b4781da05bdbd7682dc10c9788386b99349ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725992677103413113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0276b6da07265e518018db7a0ad97828,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c33747b9a7e31c35e1d58b859edfb6f0a2b19386be2ac4476640baaf59ca5e9,PodSandboxId:20a90a3fa8dfed8490e7f98060a0224e87b9a24a5a685b84f32dbd05d4bfea61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725992677040727967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a20b7386bbb0fcd5c5efd9a16a10f1a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5322d1fb1585e4fa1c8971d009fbd9e36a75e5d96c52a077ed858c5aba3f6,PodSandboxId:be7348f29e3a9f25bab84aea9e87da0b7a7c29397c3a066b0779fc5dd28e8d03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725992357209488442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gbtc6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 601a1920-7ada-4e22-bc76-a7168d0a0bf4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e28c3bf386c7f6b3456cbd5ffbc27426deb230f9c35ed5d09da9958cef0687b,PodSandboxId:b6c9ffea7d3910bb271189f7e25fbf13940ba77f177ed56a23a34e71a450243d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725992303859804144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4dglr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babfa2bd-c4b5-4374-a486-8336d9be50b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267ae04b613e33275055e3ec9dc1ddc72631b493da0a0d7269a9615aac2f9733,PodSandboxId:26fe6d01b14dfd0fd19b712cb4a66352d5c855cfa59ab7cd1d8be99bf578121b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725992302943489138,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 66f13cfc-718f-4b14-bf4b-d2ee27044660,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4b03eebef957440150db22a8e23ac078472e28e2b05760b02db504a181aa0c8,PodSandboxId:410f02f2b92394d7dda8bb2e446e84352b4226819e677b19e6419d2987de3280,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725992291394474064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2n7r,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 19158312-8a4b-4f63-8745-22339d7878c2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4648cdf59f3f30d9cbbe5265394c226367f048450e2b327c1a7b81cc816a995b,PodSandboxId:89879f1f08d805e5cc92a9c935f524912704ed7efb6df9e3882fe4a02bbccc45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725992289112428695,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j26sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 82133fae-7613-40c1-bf5e-1442806d5b4c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248fbf0cae5341cdebf6cfbd758a48b77e48379df069ffdfff8aa8149f264113,PodSandboxId:fe4bd7a4f5685d3dbd61f44efa67ded6128a43cae3cf82a63309282654e49824,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725992278504516985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a
20b7386bbb0fcd5c5efd9a16a10f1a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48859d1709a7bc6847927b022780370cde454c0419e16262db7bbaf31a878cd3,PodSandboxId:ae804a52d8bc7ec4957e30a8ec60cea6420472750c0be91a027e84f72b04cfc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725992278440959946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b16718a9933e651
46d574eda52f42b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4c3672b3e4dee78dd9e5a9b65197e3873c4a9c53965dcb7400c183ab876a1b,PodSandboxId:9321f7e4ce4d46fc1090899c779bcb495279a02d61a1bd45b2a4f7e9a62ff419,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725992278475986554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0276b6da07265e518018db7a0ad97828,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c580dc81be234ffff0295376e4516b395bbdda306d809df83963ee02f0b246,PodSandboxId:2ff3feb0b23e17fe12c300ef2d520bd085ac4ad913eadaaed21e8ea83c345735,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725992278430880301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe5727e8261b45d281a09ec0b883902d,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd962d08-7f47-4987-8f40-f4e4599e652e name=/runtime.v1.RuntimeService/ListContainers
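(Editor's note) The CRI-O entries above are gRPC interceptor debug logs of standard CRI calls (Version, ImageFsInfo, ListContainers) captured on the node. As a rough sketch only, assuming CRI-O runs as a systemd unit inside the minikube VM and using the profile name taken from the log (multinode-925076), similar output can usually be pulled with:

    minikube ssh -p multinode-925076 -- sudo journalctl -u crio --no-pager | tail -n 50

The exact unit name and log verbosity depend on how the node was provisioned, so treat this as illustrative rather than the command the test harness actually ran.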
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0a12f6a1f0c5a       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   e074512c790dc       busybox-7dff88458-gbtc6
	2f3aea89b49de       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   c860108c71b51       coredns-6f6b679f8f-4dglr
	1f7c6eb1d280f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   9621e71c7ef0b       kindnet-d2n7r
	27b457507d0da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   5d9c945f08338       storage-provisioner
	39bfc244fa885       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   7682c5501f3c7       kube-proxy-j26sr
	da5c9818ec212       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   5031d687fbe63       kube-controller-manager-multinode-925076
	8e57778740f10       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   824c9578b1825       kube-scheduler-multinode-925076
	51637becea86d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   f20d6e2105139       etcd-multinode-925076
	8c33747b9a7e3       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   20a90a3fa8dfe       kube-apiserver-multinode-925076
	f2c5322d1fb15       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   be7348f29e3a9       busybox-7dff88458-gbtc6
	7e28c3bf386c7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   b6c9ffea7d391       coredns-6f6b679f8f-4dglr
	267ae04b613e3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   26fe6d01b14df       storage-provisioner
	b4b03eebef957       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   410f02f2b9239       kindnet-d2n7r
	4648cdf59f3f3       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   89879f1f08d80       kube-proxy-j26sr
	248fbf0cae534       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   fe4bd7a4f5685       kube-apiserver-multinode-925076
	5e4c3672b3e4d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   9321f7e4ce4d4       etcd-multinode-925076
	48859d1709a7b       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   ae804a52d8bc7       kube-scheduler-multinode-925076
	e6c580dc81be2       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   2ff3feb0b23e1       kube-controller-manager-multinode-925076
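(Editor's note) A listing equivalent to the container status table above can typically be produced on the node with crictl. This is a minimal sketch, not the exact command the log collector used; it assumes crictl is available in the VM and pointed at the CRI-O socket reported elsewhere in this output (unix:///var/run/crio/crio.sock):

    minikube ssh -p multinode-925076 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

The -a flag includes exited containers, which is why both the running copies (restart count 1, created after the node restart) and the exited copies (restart count 0) of each component appear.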
	
	
	==> coredns [2f3aea89b49de5b59b0c2167610fb0c3f974618c21095f7fd04bce8f13e28b14] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36541 - 20701 "HINFO IN 3910805571411210170.1997818468255851267. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012175401s
	
	
	==> coredns [7e28c3bf386c7f6b3456cbd5ffbc27426deb230f9c35ed5d09da9958cef0687b] <==
	[INFO] 10.244.1.2:58085 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001577877s
	[INFO] 10.244.1.2:48948 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077666s
	[INFO] 10.244.1.2:44256 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062484s
	[INFO] 10.244.1.2:35064 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001021504s
	[INFO] 10.244.1.2:56743 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00007797s
	[INFO] 10.244.1.2:35138 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085117s
	[INFO] 10.244.1.2:40977 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079006s
	[INFO] 10.244.0.3:54692 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009979s
	[INFO] 10.244.0.3:40426 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009554s
	[INFO] 10.244.0.3:51805 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067476s
	[INFO] 10.244.0.3:42333 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069781s
	[INFO] 10.244.1.2:54055 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106117s
	[INFO] 10.244.1.2:44588 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106247s
	[INFO] 10.244.1.2:59910 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078344s
	[INFO] 10.244.1.2:41523 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072519s
	[INFO] 10.244.0.3:39614 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118216s
	[INFO] 10.244.0.3:47589 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000110996s
	[INFO] 10.244.0.3:49606 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078004s
	[INFO] 10.244.0.3:49841 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092246s
	[INFO] 10.244.1.2:42558 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183375s
	[INFO] 10.244.1.2:53210 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000101906s
	[INFO] 10.244.1.2:52654 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079637s
	[INFO] 10.244.1.2:46369 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000068024s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
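(Editor's note) The two coredns excerpts above come from the restarted coredns container (restart count 1) and the original one (restart count 0) listed in the container status table. A hedged sketch of how such logs are typically fetched, using the kubeconfig context created for this minikube profile and the pod name shown above:

    kubectl --context multinode-925076 -n kube-system logs coredns-6f6b679f8f-4dglr --previous

The --previous flag returns the log of the prior container instance; without it, kubectl logs shows the currently running one.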
	
	
	==> describe nodes <==
	Name:               multinode-925076
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-925076
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=multinode-925076
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T18_18_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:18:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-925076
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:26:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:24:40 +0000   Tue, 10 Sep 2024 18:17:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:24:40 +0000   Tue, 10 Sep 2024 18:17:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:24:40 +0000   Tue, 10 Sep 2024 18:17:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:24:40 +0000   Tue, 10 Sep 2024 18:18:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.248
	  Hostname:    multinode-925076
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c5fbae42c9740639faedcc3dd37cd0c
	  System UUID:                6c5fbae4-2c97-4063-9fae-dcc3dd37cd0c
	  Boot ID:                    13243f56-bc41-4383-9f8c-f52b33ae4478
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gbtc6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  kube-system                 coredns-6f6b679f8f-4dglr                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m12s
	  kube-system                 etcd-multinode-925076                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m17s
	  kube-system                 kindnet-d2n7r                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m12s
	  kube-system                 kube-apiserver-multinode-925076             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 kube-controller-manager-multinode-925076    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-proxy-j26sr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-scheduler-multinode-925076             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m10s                  kube-proxy       
	  Normal  Starting                 97s                    kube-proxy       
	  Normal  Starting                 8m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m23s (x8 over 8m23s)  kubelet          Node multinode-925076 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m23s (x8 over 8m23s)  kubelet          Node multinode-925076 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m23s (x7 over 8m23s)  kubelet          Node multinode-925076 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m17s                  kubelet          Node multinode-925076 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m17s                  kubelet          Node multinode-925076 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m17s                  kubelet          Node multinode-925076 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m17s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m13s                  node-controller  Node multinode-925076 event: Registered Node multinode-925076 in Controller
	  Normal  NodeReady                7m58s                  kubelet          Node multinode-925076 status is now: NodeReady
	  Normal  Starting                 104s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s (x8 over 104s)    kubelet          Node multinode-925076 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x8 over 104s)    kubelet          Node multinode-925076 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 104s)    kubelet          Node multinode-925076 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           97s                    node-controller  Node multinode-925076 event: Registered Node multinode-925076 in Controller
	
	
	Name:               multinode-925076-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-925076-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=multinode-925076
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T18_25_22_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:25:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-925076-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:26:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:25:52 +0000   Tue, 10 Sep 2024 18:25:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:25:52 +0000   Tue, 10 Sep 2024 18:25:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:25:52 +0000   Tue, 10 Sep 2024 18:25:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:25:52 +0000   Tue, 10 Sep 2024 18:25:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.31
	  Hostname:    multinode-925076-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa7366426b6546c29cbf192a51fa99e6
	  System UUID:                aa736642-6b65-46c2-9cbf-192a51fa99e6
	  Boot ID:                    b24c0be9-954a-49a4-ae50-c386116638b4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-59xdp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kindnet-hwts7              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m26s
	  kube-system                 kube-proxy-vpg55           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m21s                  kube-proxy  
	  Normal  Starting                 54s                    kube-proxy  
	  Normal  NodeAllocatableEnforced  7m27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m26s (x2 over 7m27s)  kubelet     Node multinode-925076-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m26s (x2 over 7m27s)  kubelet     Node multinode-925076-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m26s (x2 over 7m27s)  kubelet     Node multinode-925076-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m8s                   kubelet     Node multinode-925076-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  59s (x2 over 59s)      kubelet     Node multinode-925076-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x2 over 59s)      kubelet     Node multinode-925076-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x2 over 59s)      kubelet     Node multinode-925076-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  59s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-925076-m02 status is now: NodeReady
	
	
	Name:               multinode-925076-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-925076-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=multinode-925076
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T18_26_00_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:25:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-925076-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:26:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:26:17 +0000   Tue, 10 Sep 2024 18:25:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:26:17 +0000   Tue, 10 Sep 2024 18:25:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:26:17 +0000   Tue, 10 Sep 2024 18:25:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:26:17 +0000   Tue, 10 Sep 2024 18:26:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    multinode-925076-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 394f86f68bfe466eb0176ade52f577a3
	  System UUID:                394f86f6-8bfe-466e-b017-6ade52f577a3
	  Boot ID:                    c7f48350-a486-48bb-a31f-8c0264965ad0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rnchg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m35s
	  kube-system                 kube-proxy-lsjg7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m42s                  kube-proxy       
	  Normal  Starting                 6m30s                  kube-proxy       
	  Normal  Starting                 16s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m35s (x2 over 6m35s)  kubelet          Node multinode-925076-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x2 over 6m35s)  kubelet          Node multinode-925076-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x2 over 6m35s)  kubelet          Node multinode-925076-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m16s                  kubelet          Node multinode-925076-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m47s (x2 over 5m47s)  kubelet          Node multinode-925076-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m47s (x2 over 5m47s)  kubelet          Node multinode-925076-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m47s (x2 over 5m47s)  kubelet          Node multinode-925076-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m29s                  kubelet          Node multinode-925076-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  21s (x2 over 21s)      kubelet          Node multinode-925076-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 21s)      kubelet          Node multinode-925076-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 21s)      kubelet          Node multinode-925076-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                    node-controller  Node multinode-925076-m03 event: Registered Node multinode-925076-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-925076-m03 status is now: NodeReady
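
	The three per-node dumps above (labels, conditions, capacity, non-terminated pods, events) follow the format of kubectl's node describe output. A minimal sketch for re-collecting them, assuming the multinode-925076 kubeconfig context from this run is still present locally:

	    # Re-collect the per-node detail shown above.
	    kubectl --context multinode-925076 describe nodes
	    # Or narrow it to a single node, e.g. the third worker:
	    kubectl --context multinode-925076 describe node multinode-925076-m03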
	
	
	==> dmesg <==
	[  +0.054862] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.187431] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.126014] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.284016] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.883090] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +4.029593] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.075891] kauditd_printk_skb: 158 callbacks suppressed
	[Sep10 18:18] systemd-fstab-generator[1206]: Ignoring "noauto" option for root device
	[  +0.089532] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.636176] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.152716] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[ +13.703163] kauditd_printk_skb: 60 callbacks suppressed
	[Sep10 18:19] kauditd_printk_skb: 14 callbacks suppressed
	[Sep10 18:24] systemd-fstab-generator[2668]: Ignoring "noauto" option for root device
	[  +0.143662] systemd-fstab-generator[2680]: Ignoring "noauto" option for root device
	[  +0.168714] systemd-fstab-generator[2694]: Ignoring "noauto" option for root device
	[  +0.132316] systemd-fstab-generator[2706]: Ignoring "noauto" option for root device
	[  +0.272309] systemd-fstab-generator[2734]: Ignoring "noauto" option for root device
	[  +5.318092] systemd-fstab-generator[2828]: Ignoring "noauto" option for root device
	[  +0.079860] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.960659] systemd-fstab-generator[2950]: Ignoring "noauto" option for root device
	[  +6.201056] kauditd_printk_skb: 74 callbacks suppressed
	[ +14.935796] systemd-fstab-generator[3800]: Ignoring "noauto" option for root device
	[  +0.100573] kauditd_printk_skb: 36 callbacks suppressed
	[Sep10 18:25] kauditd_printk_skb: 12 callbacks suppressed
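
	The kernel ring-buffer excerpt above can be re-read from the test VM; a minimal sketch, assuming the KVM node for the multinode-925076 profile is still running (this report's "==> section <==" layout matches what minikube's log collection emits):

	    # Re-read the VM kernel log for this profile.
	    minikube -p multinode-925076 ssh -- sudo dmesg
	    # Regenerate the full log bundle (dmesg plus per-container logs).
	    minikube -p multinode-925076 logs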
	
	
	==> etcd [51637becea86dae763bacad207a60122b90103f03c5723dd3a03fafe64303651] <==
	{"level":"info","ts":"2024-09-10T18:24:37.525948Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ffc6a57a6de49e73","local-member-id":"1aa4f7d85b49255a","added-peer-id":"1aa4f7d85b49255a","added-peer-peer-urls":["https://192.168.39.248:2380"]}
	{"level":"info","ts":"2024-09-10T18:24:37.526099Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ffc6a57a6de49e73","local-member-id":"1aa4f7d85b49255a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:24:37.526149Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:24:37.552125Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:24:37.555383Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-10T18:24:37.561217Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"1aa4f7d85b49255a","initial-advertise-peer-urls":["https://192.168.39.248:2380"],"listen-peer-urls":["https://192.168.39.248:2380"],"advertise-client-urls":["https://192.168.39.248:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.248:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-10T18:24:37.561356Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-10T18:24:37.561753Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.248:2380"}
	{"level":"info","ts":"2024-09-10T18:24:37.565909Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.248:2380"}
	{"level":"info","ts":"2024-09-10T18:24:39.081905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-10T18:24:39.081951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-10T18:24:39.081987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a received MsgPreVoteResp from 1aa4f7d85b49255a at term 2"}
	{"level":"info","ts":"2024-09-10T18:24:39.081999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a became candidate at term 3"}
	{"level":"info","ts":"2024-09-10T18:24:39.082005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a received MsgVoteResp from 1aa4f7d85b49255a at term 3"}
	{"level":"info","ts":"2024-09-10T18:24:39.082014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a became leader at term 3"}
	{"level":"info","ts":"2024-09-10T18:24:39.082021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1aa4f7d85b49255a elected leader 1aa4f7d85b49255a at term 3"}
	{"level":"info","ts":"2024-09-10T18:24:39.087775Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:24:39.088797Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:24:39.087737Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1aa4f7d85b49255a","local-member-attributes":"{Name:multinode-925076 ClientURLs:[https://192.168.39.248:2379]}","request-path":"/0/members/1aa4f7d85b49255a/attributes","cluster-id":"ffc6a57a6de49e73","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T18:24:39.089176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:24:39.089455Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T18:24:39.089493Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T18:24:39.089779Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.248:2379"}
	{"level":"info","ts":"2024-09-10T18:24:39.090234Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:24:39.091204Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [5e4c3672b3e4dee78dd9e5a9b65197e3873c4a9c53965dcb7400c183ab876a1b] <==
	{"level":"info","ts":"2024-09-10T18:17:59.011083Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1aa4f7d85b49255a","local-member-attributes":"{Name:multinode-925076 ClientURLs:[https://192.168.39.248:2379]}","request-path":"/0/members/1aa4f7d85b49255a/attributes","cluster-id":"ffc6a57a6de49e73","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T18:17:59.011271Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:17:59.011574Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:17:59.011916Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:17:59.016879Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T18:17:59.016951Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T18:17:59.018722Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:17:59.019577Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.248:2379"}
	{"level":"info","ts":"2024-09-10T18:17:59.014651Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:17:59.030624Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T18:17:59.031960Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ffc6a57a6de49e73","local-member-id":"1aa4f7d85b49255a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:17:59.069727Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:17:59.112150Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:18:53.884265Z","caller":"traceutil/trace.go:171","msg":"trace[1652183131] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"102.802549ms","start":"2024-09-10T18:18:53.781437Z","end":"2024-09-10T18:18:53.884239Z","steps":["trace[1652183131] 'process raft request'  (duration: 88.674128ms)","trace[1652183131] 'compare'  (duration: 14.049045ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-10T18:20:38.027101Z","caller":"traceutil/trace.go:171","msg":"trace[1798120250] transaction","detail":"{read_only:false; response_revision:738; number_of_response:1; }","duration":"139.268402ms","start":"2024-09-10T18:20:37.887720Z","end":"2024-09-10T18:20:38.026988Z","steps":["trace[1798120250] 'process raft request'  (duration: 138.059865ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T18:22:57.002394Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-10T18:22:57.002544Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-925076","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.248:2380"],"advertise-client-urls":["https://192.168.39.248:2379"]}
	{"level":"warn","ts":"2024-09-10T18:22:57.002715Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-10T18:22:57.002814Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-10T18:22:57.081545Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.248:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-10T18:22:57.082005Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.248:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-10T18:22:57.083242Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1aa4f7d85b49255a","current-leader-member-id":"1aa4f7d85b49255a"}
	{"level":"info","ts":"2024-09-10T18:22:57.085631Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.248:2380"}
	{"level":"info","ts":"2024-09-10T18:22:57.085809Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.248:2380"}
	{"level":"info","ts":"2024-09-10T18:22:57.085908Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-925076","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.248:2380"],"advertise-client-urls":["https://192.168.39.248:2379"]}
	
	
	==> kernel <==
	 18:26:20 up 8 min,  0 users,  load average: 0.19, 0.14, 0.10
	Linux multinode-925076 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1f7c6eb1d280fdcc0f773caf8ba34f30726aebc808785c927ea8aeeaf3882c19] <==
	I0910 18:25:33.884805       1 main.go:322] Node multinode-925076-m03 has CIDR [10.244.3.0/24] 
	I0910 18:25:43.883743       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0910 18:25:43.883965       1 main.go:299] handling current node
	I0910 18:25:43.884052       1 main.go:295] Handling node with IPs: map[192.168.39.31:{}]
	I0910 18:25:43.884090       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
	I0910 18:25:43.884314       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0910 18:25:43.884348       1 main.go:322] Node multinode-925076-m03 has CIDR [10.244.3.0/24] 
	I0910 18:25:53.883870       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0910 18:25:53.883908       1 main.go:299] handling current node
	I0910 18:25:53.883936       1 main.go:295] Handling node with IPs: map[192.168.39.31:{}]
	I0910 18:25:53.883942       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
	I0910 18:25:53.884075       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0910 18:25:53.884101       1 main.go:322] Node multinode-925076-m03 has CIDR [10.244.3.0/24] 
	I0910 18:26:03.883583       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0910 18:26:03.883690       1 main.go:322] Node multinode-925076-m03 has CIDR [10.244.2.0/24] 
	I0910 18:26:03.883890       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0910 18:26:03.883995       1 main.go:299] handling current node
	I0910 18:26:03.884672       1 main.go:295] Handling node with IPs: map[192.168.39.31:{}]
	I0910 18:26:03.884808       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
	I0910 18:26:13.884091       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0910 18:26:13.884234       1 main.go:299] handling current node
	I0910 18:26:13.884283       1 main.go:295] Handling node with IPs: map[192.168.39.31:{}]
	I0910 18:26:13.884313       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
	I0910 18:26:13.884557       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0910 18:26:13.884902       1 main.go:322] Node multinode-925076-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [b4b03eebef957440150db22a8e23ac078472e28e2b05760b02db504a181aa0c8] <==
	I0910 18:22:12.380813       1 main.go:322] Node multinode-925076-m03 has CIDR [10.244.3.0/24] 
	I0910 18:22:22.386954       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0910 18:22:22.387009       1 main.go:299] handling current node
	I0910 18:22:22.387023       1 main.go:295] Handling node with IPs: map[192.168.39.31:{}]
	I0910 18:22:22.387028       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
	I0910 18:22:22.387186       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0910 18:22:22.387210       1 main.go:322] Node multinode-925076-m03 has CIDR [10.244.3.0/24] 
	I0910 18:22:32.390126       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0910 18:22:32.390257       1 main.go:299] handling current node
	I0910 18:22:32.390285       1 main.go:295] Handling node with IPs: map[192.168.39.31:{}]
	I0910 18:22:32.390302       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
	I0910 18:22:32.390465       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0910 18:22:32.390489       1 main.go:322] Node multinode-925076-m03 has CIDR [10.244.3.0/24] 
	I0910 18:22:42.380708       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0910 18:22:42.380926       1 main.go:299] handling current node
	I0910 18:22:42.380964       1 main.go:295] Handling node with IPs: map[192.168.39.31:{}]
	I0910 18:22:42.380985       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
	I0910 18:22:42.381140       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0910 18:22:42.381162       1 main.go:322] Node multinode-925076-m03 has CIDR [10.244.3.0/24] 
	I0910 18:22:52.389527       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0910 18:22:52.389578       1 main.go:299] handling current node
	I0910 18:22:52.389602       1 main.go:295] Handling node with IPs: map[192.168.39.31:{}]
	I0910 18:22:52.389622       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
	I0910 18:22:52.389770       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0910 18:22:52.389775       1 main.go:322] Node multinode-925076-m03 has CIDR [10.244.3.0/24] 
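
	Taken together, the two kindnet logs show multinode-925076-m03 moving from 10.244.3.0/24 (advertised until the restart) to 10.244.2.0/24 once the node is re-registered at 18:25:59. A small sketch for confirming the ranges currently assigned, assuming the same kubeconfig context:

	    # List the PodCIDR assigned to each node after the re-registration.
	    kubectl --context multinode-925076 get nodes \
	      -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR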
	
	
	==> kube-apiserver [248fbf0cae5341cdebf6cfbd758a48b77e48379df069ffdfff8aa8149f264113] <==
	W0910 18:18:02.663011       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.248]
	I0910 18:18:02.664185       1 controller.go:615] quota admission added evaluator for: endpoints
	I0910 18:18:02.669046       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0910 18:18:03.024225       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0910 18:18:03.638766       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0910 18:18:03.655045       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0910 18:18:03.666989       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0910 18:18:08.574811       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0910 18:18:08.675616       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0910 18:19:18.340346       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:56496: use of closed network connection
	E0910 18:19:18.503707       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:56510: use of closed network connection
	E0910 18:19:18.680385       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:56518: use of closed network connection
	E0910 18:19:18.849007       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:56528: use of closed network connection
	E0910 18:19:19.010593       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:56546: use of closed network connection
	E0910 18:19:19.181646       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:56558: use of closed network connection
	E0910 18:19:19.447171       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:56598: use of closed network connection
	E0910 18:19:19.613066       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:56616: use of closed network connection
	E0910 18:19:19.770561       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:43780: use of closed network connection
	E0910 18:19:19.928666       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:43794: use of closed network connection
	I0910 18:22:57.005763       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0910 18:22:57.017975       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:22:57.018307       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:22:57.019265       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:22:57.019329       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:22:57.019357       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [8c33747b9a7e31c35e1d58b859edfb6f0a2b19386be2ac4476640baaf59ca5e9] <==
	I0910 18:24:40.493863       1 aggregator.go:171] initial CRD sync complete...
	I0910 18:24:40.493880       1 autoregister_controller.go:144] Starting autoregister controller
	I0910 18:24:40.493885       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0910 18:24:40.493890       1 cache.go:39] Caches are synced for autoregister controller
	I0910 18:24:40.494436       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0910 18:24:40.501047       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0910 18:24:40.501083       1 policy_source.go:224] refreshing policies
	I0910 18:24:40.541390       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0910 18:24:40.541429       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0910 18:24:40.541810       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0910 18:24:40.542320       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0910 18:24:40.544511       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0910 18:24:40.544615       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0910 18:24:40.544950       1 shared_informer.go:320] Caches are synced for configmaps
	I0910 18:24:40.547354       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0910 18:24:40.555765       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0910 18:24:40.567750       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0910 18:24:41.355166       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0910 18:24:42.518545       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0910 18:24:43.073592       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0910 18:24:43.134250       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0910 18:24:43.279199       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0910 18:24:43.300944       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0910 18:24:43.991159       1 controller.go:615] quota admission added evaluator for: endpoints
	I0910 18:24:44.188486       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
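
	By 18:24:44 the restarted apiserver has re-registered its quota evaluators and reports its caches synced. A quick sketch for confirming it is serving and ready again, assuming the context still points at https://192.168.39.248:8443:

	    # Readiness probe against the restarted apiserver (verbose check list).
	    kubectl --context multinode-925076 get --raw='/readyz?verbose'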
	
	
	==> kube-controller-manager [da5c9818ec2122dec6bcf3bd808ac03ad3ecf058f5377b5547d1fafc2773dd44] <==
	I0910 18:25:39.491556       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m02"
	I0910 18:25:39.502107       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m02"
	I0910 18:25:39.510160       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.388µs"
	I0910 18:25:39.522159       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.549µs"
	I0910 18:25:42.182352       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.683068ms"
	I0910 18:25:42.182512       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="50.337µs"
	I0910 18:25:43.877591       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m02"
	I0910 18:25:52.334586       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m02"
	I0910 18:25:58.267375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:25:58.282598       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:25:58.510149       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-925076-m02"
	I0910 18:25:58.510320       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:25:59.593675       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-925076-m03\" does not exist"
	I0910 18:25:59.594398       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-925076-m02"
	I0910 18:25:59.614590       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-925076-m03" podCIDRs=["10.244.2.0/24"]
	I0910 18:25:59.614632       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:25:59.614657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:26:00.044790       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:26:00.422611       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:26:03.985220       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:26:09.872053       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:26:17.323727       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-925076-m02"
	I0910 18:26:17.324088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:26:17.334343       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:26:18.898073       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	
	
	==> kube-controller-manager [e6c580dc81be234ffff0295376e4516b395bbdda306d809df83963ee02f0b246] <==
	I0910 18:20:32.457131       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:32.679753       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:32.681262       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-925076-m02"
	I0910 18:20:33.789003       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-925076-m03\" does not exist"
	I0910 18:20:33.789132       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-925076-m02"
	I0910 18:20:33.807543       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-925076-m03" podCIDRs=["10.244.3.0/24"]
	I0910 18:20:33.808105       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:33.808268       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:34.213125       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:34.580528       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:38.029189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:43.907489       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:51.539385       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:51.539442       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-925076-m02"
	I0910 18:20:51.547896       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:52.834394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:21:27.852145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m02"
	I0910 18:21:27.852533       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-925076-m03"
	I0910 18:21:27.881189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m02"
	I0910 18:21:27.945073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="23.040817ms"
	I0910 18:21:27.945369       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.835µs"
	I0910 18:21:32.986568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m02"
	I0910 18:21:37.933504       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:21:37.956680       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:21:43.058206       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	
	
	==> kube-proxy [39bfc244fa8856bcc84943c722d7e61d6e8dcc5ddd69ad8c17c5c8ae5ded80ab] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 18:24:43.077963       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 18:24:43.101732       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.248"]
	E0910 18:24:43.101876       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 18:24:43.195217       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 18:24:43.195300       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 18:24:43.195333       1 server_linux.go:169] "Using iptables Proxier"
	I0910 18:24:43.201115       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 18:24:43.201465       1 server.go:483] "Version info" version="v1.31.0"
	I0910 18:24:43.201496       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:24:43.205950       1 config.go:197] "Starting service config controller"
	I0910 18:24:43.206000       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 18:24:43.206062       1 config.go:104] "Starting endpoint slice config controller"
	I0910 18:24:43.206084       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 18:24:43.206592       1 config.go:326] "Starting node config controller"
	I0910 18:24:43.206628       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 18:24:43.307117       1 shared_informer.go:320] Caches are synced for node config
	I0910 18:24:43.307203       1 shared_informer.go:320] Caches are synced for service config
	I0910 18:24:43.307227       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [4648cdf59f3f30d9cbbe5265394c226367f048450e2b327c1a7b81cc816a995b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 18:18:09.496560       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 18:18:09.513982       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.248"]
	E0910 18:18:09.514917       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 18:18:09.599791       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 18:18:09.599903       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 18:18:09.599929       1 server_linux.go:169] "Using iptables Proxier"
	I0910 18:18:09.610330       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 18:18:09.610572       1 server.go:483] "Version info" version="v1.31.0"
	I0910 18:18:09.610583       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:18:09.612269       1 config.go:197] "Starting service config controller"
	I0910 18:18:09.612278       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 18:18:09.612314       1 config.go:104] "Starting endpoint slice config controller"
	I0910 18:18:09.612318       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 18:18:09.612661       1 config.go:326] "Starting node config controller"
	I0910 18:18:09.612667       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 18:18:09.712633       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 18:18:09.712678       1 shared_informer.go:320] Caches are synced for service config
	I0910 18:18:09.712908       1 shared_informer.go:320] Caches are synced for node config
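
	Both kube-proxy instances hit the same non-fatal nftables cleanup error ("Operation not supported") and then proceed with the iptables proxier, so the error by itself does not indicate a broken dataplane. A sketch for inspecting the configured mode, assuming the kubeadm-style "kube-proxy" ConfigMap and its "config.conf" key (an empty mode means the platform default, which both instances resolve to iptables here):

	    # Show the configured proxy mode (empty string = platform default).
	    kubectl --context multinode-925076 -n kube-system \
	      get configmap kube-proxy -o jsonpath='{.data.config\.conf}' | grep -E '^mode:'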
	
	
	==> kube-scheduler [48859d1709a7bc6847927b022780370cde454c0419e16262db7bbaf31a878cd3] <==
	E0910 18:18:01.045406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:01.045499       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 18:18:01.045543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:01.045623       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 18:18:01.045662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:01.993502       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 18:18:01.993643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:02.032301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 18:18:02.032350       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:02.042769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 18:18:02.042924       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:02.069385       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0910 18:18:02.070596       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:02.126704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0910 18:18:02.126812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:02.141544       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0910 18:18:02.141715       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0910 18:18:02.176216       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 18:18:02.176268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:02.176877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0910 18:18:02.176954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:02.188456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0910 18:18:02.188503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0910 18:18:04.939221       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0910 18:22:57.012708       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8e57778740f1076867b7107ddec5bd8359baca4f6dc51d6e47a690a3a3263d7f] <==
	I0910 18:24:38.286733       1 serving.go:386] Generated self-signed cert in-memory
	W0910 18:24:40.376020       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0910 18:24:40.376123       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0910 18:24:40.376185       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0910 18:24:40.376197       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0910 18:24:40.474461       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0910 18:24:40.474520       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:24:40.480563       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0910 18:24:40.480747       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0910 18:24:40.480803       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 18:24:40.487604       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0910 18:24:40.581451       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 18:24:46 multinode-925076 kubelet[2957]: E0910 18:24:46.513248    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992686512370638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:24:46 multinode-925076 kubelet[2957]: E0910 18:24:46.513293    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992686512370638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:24:56 multinode-925076 kubelet[2957]: E0910 18:24:56.516348    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992696515136573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:24:56 multinode-925076 kubelet[2957]: E0910 18:24:56.516660    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992696515136573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:25:06 multinode-925076 kubelet[2957]: E0910 18:25:06.521938    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992706520721892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:25:06 multinode-925076 kubelet[2957]: E0910 18:25:06.522149    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992706520721892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:25:16 multinode-925076 kubelet[2957]: E0910 18:25:16.523454    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992716523122960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:25:16 multinode-925076 kubelet[2957]: E0910 18:25:16.523500    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992716523122960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:25:26 multinode-925076 kubelet[2957]: E0910 18:25:26.531601    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992726527356594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:25:26 multinode-925076 kubelet[2957]: E0910 18:25:26.532011    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992726527356594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:25:36 multinode-925076 kubelet[2957]: E0910 18:25:36.464890    2957 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 18:25:36 multinode-925076 kubelet[2957]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 18:25:36 multinode-925076 kubelet[2957]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 18:25:36 multinode-925076 kubelet[2957]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 18:25:36 multinode-925076 kubelet[2957]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 18:25:36 multinode-925076 kubelet[2957]: E0910 18:25:36.533318    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992736533139221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:25:36 multinode-925076 kubelet[2957]: E0910 18:25:36.533362    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992736533139221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:25:46 multinode-925076 kubelet[2957]: E0910 18:25:46.535235    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992746534470812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:25:46 multinode-925076 kubelet[2957]: E0910 18:25:46.536291    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992746534470812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:25:56 multinode-925076 kubelet[2957]: E0910 18:25:56.537613    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992756537372870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:25:56 multinode-925076 kubelet[2957]: E0910 18:25:56.537657    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992756537372870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:26:06 multinode-925076 kubelet[2957]: E0910 18:26:06.541691    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992766541466353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:26:06 multinode-925076 kubelet[2957]: E0910 18:26:06.542272    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992766541466353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:26:16 multinode-925076 kubelet[2957]: E0910 18:26:16.543379    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992776543065922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:26:16 multinode-925076 kubelet[2957]: E0910 18:26:16.543420    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992776543065922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:26:19.835544   43739 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19598-5973/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
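The bufio.Scanner: token too long error in the stderr block above comes from scanning lastStart.txt line by line: bufio.Scanner rejects any token larger than its default 64 KiB limit, and the single-line cluster-config dumps written to that file (like the ones visible later in this report) are far longer than that. A minimal sketch of reading such a file with an enlarged scanner buffer, using an illustrative path; this is not minikube's actual logs.go code:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Illustrative path only; the point is that the file contains single
		// lines far larger than bufio.Scanner's default 64 KiB token limit.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the per-line limit to 1 MiB; without this call, very long lines
		// fail with the "token too long" error seen above.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan:", err)
		}
	}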
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-925076 -n multinode-925076
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-925076 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (327.49s)
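Separately, the kubelet log dumped above repeats "Eviction manager: failed to get HasDedicatedImageFs ... missing image stats" every ten seconds because the ImageFsInfoResponse it receives lists one image filesystem but an empty ContainerFilesystems slice, so there is nothing to compare the image mountpoint against. The sketch below only mirrors the shape of that response with stand-in types to show why the check cannot succeed; it is not the kubelet's or cri-o's real code:

	package main

	import (
		"errors"
		"fmt"
	)

	// Stand-in types that mirror the ImageFsInfoResponse shape printed in the
	// kubelet log above; these are not the real cri-api types.
	type FilesystemUsage struct{ Mountpoint string }

	type ImageFsInfoResponse struct {
		ImageFilesystems     []FilesystemUsage
		ContainerFilesystems []FilesystemUsage
	}

	// hasDedicatedImageFs reports whether images and writable container layers
	// sit on different filesystems. With an empty ContainerFilesystems slice
	// there is nothing to compare, hence the "missing image stats" error.
	func hasDedicatedImageFs(r ImageFsInfoResponse) (bool, error) {
		if len(r.ImageFilesystems) == 0 || len(r.ContainerFilesystems) == 0 {
			return false, errors.New("missing image stats")
		}
		return r.ImageFilesystems[0].Mountpoint != r.ContainerFilesystems[0].Mountpoint, nil
	}

	func main() {
		resp := ImageFsInfoResponse{
			ImageFilesystems: []FilesystemUsage{{Mountpoint: "/var/lib/containers/storage/overlay-images"}},
			// ContainerFilesystems left empty, as in the log above.
		}
		if _, err := hasDedicatedImageFs(resp); err != nil {
			fmt.Println("eviction manager would report:", err)
		}
	}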

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 stop
E0910 18:26:35.174229   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:26:59.605779   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-925076 stop: exit status 82 (2m0.460831639s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-925076-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-925076 stop": exit status 82
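Exit status 82 accompanies the GUEST_STOP_TIMEOUT reason shown in the stderr above: the kvm2 driver kept reporting multinode-925076-m02 as "Running" until the stop retry window ran out, roughly two minutes here. The shape of that failure is a poll-with-deadline loop like the one below; this is a generic, hypothetical sketch (getState stands in for a driver state query), not minikube's stop implementation:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// getState is a hypothetical stand-in for a driver state query; it always
	// reports "Running" to mimic the behaviour seen in the log above.
	func getState() string { return "Running" }

	// waitForStop polls the VM state until it leaves "Running" or the deadline passes.
	func waitForStop(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if getState() != "Running" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// A short timeout keeps the example quick; the real stop window is minutes.
		if err := waitForStop(6 * time.Second); err != nil {
			fmt.Println("GUEST_STOP_TIMEOUT:", err)
		}
	}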
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-925076 status: exit status 3 (18.79218509s)

                                                
                                                
-- stdout --
	multinode-925076
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-925076-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:28:43.233451   44414 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.31:22: connect: no route to host
	E0910 18:28:43.233495   44414 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.31:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-925076 status" : exit status 3
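The status failure above is a plain reachability problem: both stderr lines come from opening an SSH session to multinode-925076-m02 at 192.168.39.31:22 and getting "no route to host", which is why the worker is reported as host: Error and kubelet: Nonexistent. A small probe of the same condition, assuming the address taken from the log (it is only meaningful inside this CI run):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address copied from the status error above; specific to this run.
		addr := "192.168.39.31:22"
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// On this run the dial fails with "connect: no route to host".
			fmt.Println("ssh port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable:", addr)
	}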
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-925076 -n multinode-925076
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-925076 logs -n 25: (1.456883828s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-925076 ssh -n                                                                 | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-925076 cp multinode-925076-m02:/home/docker/cp-test.txt                       | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076:/home/docker/cp-test_multinode-925076-m02_multinode-925076.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n                                                                 | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n multinode-925076 sudo cat                                       | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | /home/docker/cp-test_multinode-925076-m02_multinode-925076.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-925076 cp multinode-925076-m02:/home/docker/cp-test.txt                       | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m03:/home/docker/cp-test_multinode-925076-m02_multinode-925076-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n                                                                 | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n multinode-925076-m03 sudo cat                                   | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | /home/docker/cp-test_multinode-925076-m02_multinode-925076-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-925076 cp testdata/cp-test.txt                                                | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n                                                                 | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-925076 cp multinode-925076-m03:/home/docker/cp-test.txt                       | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2183249346/001/cp-test_multinode-925076-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n                                                                 | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-925076 cp multinode-925076-m03:/home/docker/cp-test.txt                       | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076:/home/docker/cp-test_multinode-925076-m03_multinode-925076.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n                                                                 | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n multinode-925076 sudo cat                                       | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | /home/docker/cp-test_multinode-925076-m03_multinode-925076.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-925076 cp multinode-925076-m03:/home/docker/cp-test.txt                       | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m02:/home/docker/cp-test_multinode-925076-m03_multinode-925076-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n                                                                 | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | multinode-925076-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-925076 ssh -n multinode-925076-m02 sudo cat                                   | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | /home/docker/cp-test_multinode-925076-m03_multinode-925076-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-925076 node stop m03                                                          | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	| node    | multinode-925076 node start                                                             | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC | 10 Sep 24 18:20 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-925076                                                                | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC |                     |
	| stop    | -p multinode-925076                                                                     | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:20 UTC |                     |
	| start   | -p multinode-925076                                                                     | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:22 UTC | 10 Sep 24 18:26 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-925076                                                                | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:26 UTC |                     |
	| node    | multinode-925076 node delete                                                            | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:26 UTC | 10 Sep 24 18:26 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-925076 stop                                                                   | multinode-925076 | jenkins | v1.34.0 | 10 Sep 24 18:26 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:22:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:22:56.034651   42658 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:22:56.035085   42658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:22:56.035104   42658 out.go:358] Setting ErrFile to fd 2...
	I0910 18:22:56.035122   42658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:22:56.035588   42658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:22:56.036466   42658 out.go:352] Setting JSON to false
	I0910 18:22:56.037425   42658 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3928,"bootTime":1725988648,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 18:22:56.037490   42658 start.go:139] virtualization: kvm guest
	I0910 18:22:56.039270   42658 out.go:177] * [multinode-925076] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 18:22:56.040726   42658 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:22:56.040726   42658 notify.go:220] Checking for updates...
	I0910 18:22:56.043195   42658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:22:56.044422   42658 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:22:56.045532   42658 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:22:56.046685   42658 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 18:22:56.047912   42658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:22:56.049332   42658 config.go:182] Loaded profile config "multinode-925076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:22:56.049456   42658 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:22:56.049880   42658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:22:56.049932   42658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:22:56.064454   42658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0910 18:22:56.064872   42658 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:22:56.065481   42658 main.go:141] libmachine: Using API Version  1
	I0910 18:22:56.065503   42658 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:22:56.065813   42658 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:22:56.065968   42658 main.go:141] libmachine: (multinode-925076) Calling .DriverName
	I0910 18:22:56.100319   42658 out.go:177] * Using the kvm2 driver based on existing profile
	I0910 18:22:56.101356   42658 start.go:297] selected driver: kvm2
	I0910 18:22:56.101369   42658 start.go:901] validating driver "kvm2" against &{Name:multinode-925076 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:multinode-925076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.164 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:22:56.101501   42658 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:22:56.101799   42658 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:22:56.101860   42658 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 18:22:56.115445   42658 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 18:22:56.116085   42658 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:22:56.116146   42658 cni.go:84] Creating CNI manager for ""
	I0910 18:22:56.116156   42658 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0910 18:22:56.116202   42658 start.go:340] cluster config:
	{Name:multinode-925076 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-925076 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.164 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:22:56.116335   42658 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:22:56.118668   42658 out.go:177] * Starting "multinode-925076" primary control-plane node in "multinode-925076" cluster
	I0910 18:22:56.119806   42658 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:22:56.119830   42658 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0910 18:22:56.119838   42658 cache.go:56] Caching tarball of preloaded images
	I0910 18:22:56.119898   42658 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 18:22:56.119907   42658 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 18:22:56.120024   42658 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/config.json ...
	I0910 18:22:56.120210   42658 start.go:360] acquireMachinesLock for multinode-925076: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:22:56.120247   42658 start.go:364] duration metric: took 21.961µs to acquireMachinesLock for "multinode-925076"
	I0910 18:22:56.120259   42658 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:22:56.120270   42658 fix.go:54] fixHost starting: 
	I0910 18:22:56.120510   42658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:22:56.120539   42658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:22:56.134186   42658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44625
	I0910 18:22:56.134611   42658 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:22:56.135085   42658 main.go:141] libmachine: Using API Version  1
	I0910 18:22:56.135107   42658 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:22:56.135389   42658 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:22:56.135540   42658 main.go:141] libmachine: (multinode-925076) Calling .DriverName
	I0910 18:22:56.135710   42658 main.go:141] libmachine: (multinode-925076) Calling .GetState
	I0910 18:22:56.137098   42658 fix.go:112] recreateIfNeeded on multinode-925076: state=Running err=<nil>
	W0910 18:22:56.137114   42658 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:22:56.138927   42658 out.go:177] * Updating the running kvm2 "multinode-925076" VM ...
	I0910 18:22:56.140044   42658 machine.go:93] provisionDockerMachine start ...
	I0910 18:22:56.140062   42658 main.go:141] libmachine: (multinode-925076) Calling .DriverName
	I0910 18:22:56.140247   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:22:56.142701   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.143149   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:22:56.143176   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.143305   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:22:56.143446   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:22:56.143609   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:22:56.143740   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:22:56.143890   42658 main.go:141] libmachine: Using SSH client type: native
	I0910 18:22:56.144073   42658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0910 18:22:56.144082   42658 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:22:56.254035   42658 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-925076
	
	I0910 18:22:56.254062   42658 main.go:141] libmachine: (multinode-925076) Calling .GetMachineName
	I0910 18:22:56.254333   42658 buildroot.go:166] provisioning hostname "multinode-925076"
	I0910 18:22:56.254361   42658 main.go:141] libmachine: (multinode-925076) Calling .GetMachineName
	I0910 18:22:56.254562   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:22:56.257527   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.257840   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:22:56.257878   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.258029   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:22:56.258199   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:22:56.258372   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:22:56.258499   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:22:56.258691   42658 main.go:141] libmachine: Using SSH client type: native
	I0910 18:22:56.258849   42658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0910 18:22:56.258863   42658 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-925076 && echo "multinode-925076" | sudo tee /etc/hostname
	I0910 18:22:56.381830   42658 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-925076
	
	I0910 18:22:56.381859   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:22:56.384556   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.384939   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:22:56.384967   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.385140   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:22:56.385352   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:22:56.385517   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:22:56.385656   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:22:56.385788   42658 main.go:141] libmachine: Using SSH client type: native
	I0910 18:22:56.386001   42658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0910 18:22:56.386018   42658 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-925076' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-925076/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-925076' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:22:56.498460   42658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:22:56.498493   42658 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:22:56.498530   42658 buildroot.go:174] setting up certificates
	I0910 18:22:56.498540   42658 provision.go:84] configureAuth start
	I0910 18:22:56.498549   42658 main.go:141] libmachine: (multinode-925076) Calling .GetMachineName
	I0910 18:22:56.498852   42658 main.go:141] libmachine: (multinode-925076) Calling .GetIP
	I0910 18:22:56.501431   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.501879   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:22:56.501916   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.502101   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:22:56.504190   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.504515   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:22:56.504547   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.504696   42658 provision.go:143] copyHostCerts
	I0910 18:22:56.504731   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:22:56.504768   42658 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:22:56.504779   42658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:22:56.504850   42658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:22:56.504926   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:22:56.504943   42658 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:22:56.504950   42658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:22:56.504974   42658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:22:56.505015   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:22:56.505031   42658 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:22:56.505037   42658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:22:56.505063   42658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:22:56.505138   42658 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.multinode-925076 san=[127.0.0.1 192.168.39.248 localhost minikube multinode-925076]
	I0910 18:22:56.718149   42658 provision.go:177] copyRemoteCerts
	I0910 18:22:56.718206   42658 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:22:56.718226   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:22:56.721188   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.721592   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:22:56.721619   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.721833   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:22:56.722074   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:22:56.722232   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:22:56.722385   42658 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/multinode-925076/id_rsa Username:docker}
	I0910 18:22:56.803240   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0910 18:22:56.803321   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:22:56.828440   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0910 18:22:56.828494   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0910 18:22:56.851622   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0910 18:22:56.851695   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:22:56.876280   42658 provision.go:87] duration metric: took 377.728415ms to configureAuth
	I0910 18:22:56.876305   42658 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:22:56.876528   42658 config.go:182] Loaded profile config "multinode-925076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:22:56.876597   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:22:56.879082   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.879452   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:22:56.879484   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:22:56.879650   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:22:56.879833   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:22:56.879971   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:22:56.880080   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:22:56.880267   42658 main.go:141] libmachine: Using SSH client type: native
	I0910 18:22:56.880449   42658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0910 18:22:56.880470   42658 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:24:27.511751   42658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:24:27.511785   42658 machine.go:96] duration metric: took 1m31.371726745s to provisionDockerMachine
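	Nearly all of the 1m31s reported above is spent inside the single SSH command issued at 18:22:56 and returning at 18:24:27, which writes /etc/sysconfig/crio.minikube and then restarts CRI-O. A minimal way to inspect that step after the fact, assuming the multinode-925076 profile is still running and reachable with minikube ssh (illustrative commands, not part of the test run):

	  # show the options file the provisioning step wrote (assumes the profile still exists)
	  $ minikube ssh -p multinode-925076 -- cat /etc/sysconfig/crio.minikube
	  # review what CRI-O logged around the slow restart window
	  $ minikube ssh -p multinode-925076 -- "sudo journalctl -u crio --since '2024-09-10 18:22:50' --until '2024-09-10 18:24:30' --no-pager | tail -n 50"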
	I0910 18:24:27.511805   42658 start.go:293] postStartSetup for "multinode-925076" (driver="kvm2")
	I0910 18:24:27.511843   42658 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:24:27.511868   42658 main.go:141] libmachine: (multinode-925076) Calling .DriverName
	I0910 18:24:27.512240   42658 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:24:27.512272   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:24:27.515268   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.515586   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:24:27.515609   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.515773   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:24:27.515953   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:24:27.516092   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:24:27.516219   42658 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/multinode-925076/id_rsa Username:docker}
	I0910 18:24:27.601195   42658 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:24:27.605580   42658 command_runner.go:130] > NAME=Buildroot
	I0910 18:24:27.605607   42658 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0910 18:24:27.605614   42658 command_runner.go:130] > ID=buildroot
	I0910 18:24:27.605620   42658 command_runner.go:130] > VERSION_ID=2023.02.9
	I0910 18:24:27.605628   42658 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0910 18:24:27.605801   42658 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:24:27.605823   42658 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:24:27.605907   42658 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:24:27.605987   42658 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:24:27.605996   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /etc/ssl/certs/131212.pem
	I0910 18:24:27.606071   42658 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:24:27.615918   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:24:27.639523   42658 start.go:296] duration metric: took 127.706079ms for postStartSetup
	I0910 18:24:27.639579   42658 fix.go:56] duration metric: took 1m31.519296068s for fixHost
	I0910 18:24:27.639606   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:24:27.641810   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.642191   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:24:27.642215   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.642354   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:24:27.642543   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:24:27.642698   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:24:27.642817   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:24:27.642952   42658 main.go:141] libmachine: Using SSH client type: native
	I0910 18:24:27.643152   42658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0910 18:24:27.643163   42658 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:24:27.745591   42658 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725992667.718605174
	
	I0910 18:24:27.745613   42658 fix.go:216] guest clock: 1725992667.718605174
	I0910 18:24:27.745619   42658 fix.go:229] Guest: 2024-09-10 18:24:27.718605174 +0000 UTC Remote: 2024-09-10 18:24:27.639587581 +0000 UTC m=+91.639859880 (delta=79.017593ms)
	I0910 18:24:27.745649   42658 fix.go:200] guest clock delta is within tolerance: 79.017593ms
	I0910 18:24:27.745656   42658 start.go:83] releasing machines lock for "multinode-925076", held for 1m31.625400367s
	I0910 18:24:27.745686   42658 main.go:141] libmachine: (multinode-925076) Calling .DriverName
	I0910 18:24:27.745917   42658 main.go:141] libmachine: (multinode-925076) Calling .GetIP
	I0910 18:24:27.748131   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.748492   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:24:27.748529   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.748635   42658 main.go:141] libmachine: (multinode-925076) Calling .DriverName
	I0910 18:24:27.749097   42658 main.go:141] libmachine: (multinode-925076) Calling .DriverName
	I0910 18:24:27.749247   42658 main.go:141] libmachine: (multinode-925076) Calling .DriverName
	I0910 18:24:27.749346   42658 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:24:27.749415   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:24:27.749431   42658 ssh_runner.go:195] Run: cat /version.json
	I0910 18:24:27.749453   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:24:27.751781   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.751896   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.752176   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:24:27.752204   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.752316   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:24:27.752441   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:24:27.752470   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:27.752480   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:24:27.752652   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:24:27.752685   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:24:27.752809   42658 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/multinode-925076/id_rsa Username:docker}
	I0910 18:24:27.752871   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:24:27.753006   42658 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:24:27.753149   42658 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/multinode-925076/id_rsa Username:docker}
	I0910 18:24:27.829144   42658 command_runner.go:130] > {"iso_version": "v1.34.0-1725912912-19598", "kicbase_version": "v0.0.45", "minikube_version": "v1.34.0", "commit": "a47e98bacf93197560d0f08408949de0434951d5"}
	I0910 18:24:27.849471   42658 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0910 18:24:27.850229   42658 ssh_runner.go:195] Run: systemctl --version
	I0910 18:24:27.856126   42658 command_runner.go:130] > systemd 252 (252)
	I0910 18:24:27.856156   42658 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0910 18:24:27.856209   42658 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:24:28.009619   42658 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0910 18:24:28.017645   42658 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0910 18:24:28.017842   42658 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:24:28.017899   42658 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:24:28.027151   42658 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0910 18:24:28.027170   42658 start.go:495] detecting cgroup driver to use...
	I0910 18:24:28.027219   42658 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:24:28.042943   42658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:24:28.057013   42658 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:24:28.057064   42658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:24:28.069889   42658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:24:28.082724   42658 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:24:28.225690   42658 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:24:28.362524   42658 docker.go:233] disabling docker service ...
	I0910 18:24:28.362587   42658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:24:28.379066   42658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:24:28.392555   42658 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:24:28.526705   42658 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:24:28.665198   42658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:24:28.678339   42658 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:24:28.698834   42658 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0910 18:24:28.698880   42658 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:24:28.698920   42658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:24:28.709239   42658 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:24:28.709304   42658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:24:28.719409   42658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:24:28.729384   42658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:24:28.739496   42658 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:24:28.749500   42658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:24:28.759515   42658 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:24:28.771235   42658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:24:28.781740   42658 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:24:28.791329   42658 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0910 18:24:28.791395   42658 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:24:28.802097   42658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:24:28.936470   42658 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:24:33.778790   42658 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.84228859s)
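	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) before the ~4.8s CRI-O restart. A quick check that the drop-in ended up as intended, assuming the node is still reachable with minikube ssh (illustrative only):

	  # confirm the values the sed commands were meant to set
	  $ minikube ssh -p multinode-925076 -- "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	  # confirm CRI-O is active again after the restart
	  $ minikube ssh -p multinode-925076 -- sudo systemctl is-active crio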
	I0910 18:24:33.778821   42658 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:24:33.778871   42658 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:24:33.784570   42658 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0910 18:24:33.784592   42658 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0910 18:24:33.784601   42658 command_runner.go:130] > Device: 0,22	Inode: 1313        Links: 1
	I0910 18:24:33.784611   42658 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0910 18:24:33.784619   42658 command_runner.go:130] > Access: 2024-09-10 18:24:33.695649200 +0000
	I0910 18:24:33.784633   42658 command_runner.go:130] > Modify: 2024-09-10 18:24:33.645647827 +0000
	I0910 18:24:33.784648   42658 command_runner.go:130] > Change: 2024-09-10 18:24:33.645647827 +0000
	I0910 18:24:33.784655   42658 command_runner.go:130] >  Birth: -
	I0910 18:24:33.784794   42658 start.go:563] Will wait 60s for crictl version
	I0910 18:24:33.784842   42658 ssh_runner.go:195] Run: which crictl
	I0910 18:24:33.788723   42658 command_runner.go:130] > /usr/bin/crictl
	I0910 18:24:33.788792   42658 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:24:33.833338   42658 command_runner.go:130] > Version:  0.1.0
	I0910 18:24:33.833360   42658 command_runner.go:130] > RuntimeName:  cri-o
	I0910 18:24:33.833365   42658 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0910 18:24:33.833370   42658 command_runner.go:130] > RuntimeApiVersion:  v1
	I0910 18:24:33.833386   42658 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:24:33.833454   42658 ssh_runner.go:195] Run: crio --version
	I0910 18:24:33.861992   42658 command_runner.go:130] > crio version 1.29.1
	I0910 18:24:33.862013   42658 command_runner.go:130] > Version:        1.29.1
	I0910 18:24:33.862019   42658 command_runner.go:130] > GitCommit:      unknown
	I0910 18:24:33.862023   42658 command_runner.go:130] > GitCommitDate:  unknown
	I0910 18:24:33.862027   42658 command_runner.go:130] > GitTreeState:   clean
	I0910 18:24:33.862035   42658 command_runner.go:130] > BuildDate:      2024-09-10T02:34:15Z
	I0910 18:24:33.862040   42658 command_runner.go:130] > GoVersion:      go1.21.6
	I0910 18:24:33.862043   42658 command_runner.go:130] > Compiler:       gc
	I0910 18:24:33.862053   42658 command_runner.go:130] > Platform:       linux/amd64
	I0910 18:24:33.862059   42658 command_runner.go:130] > Linkmode:       dynamic
	I0910 18:24:33.862068   42658 command_runner.go:130] > BuildTags:      
	I0910 18:24:33.862075   42658 command_runner.go:130] >   containers_image_ostree_stub
	I0910 18:24:33.862085   42658 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0910 18:24:33.862090   42658 command_runner.go:130] >   btrfs_noversion
	I0910 18:24:33.862106   42658 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0910 18:24:33.862112   42658 command_runner.go:130] >   libdm_no_deferred_remove
	I0910 18:24:33.862116   42658 command_runner.go:130] >   seccomp
	I0910 18:24:33.862120   42658 command_runner.go:130] > LDFlags:          unknown
	I0910 18:24:33.862128   42658 command_runner.go:130] > SeccompEnabled:   true
	I0910 18:24:33.862132   42658 command_runner.go:130] > AppArmorEnabled:  false
	I0910 18:24:33.862245   42658 ssh_runner.go:195] Run: crio --version
	I0910 18:24:33.890427   42658 command_runner.go:130] > crio version 1.29.1
	I0910 18:24:33.890448   42658 command_runner.go:130] > Version:        1.29.1
	I0910 18:24:33.890470   42658 command_runner.go:130] > GitCommit:      unknown
	I0910 18:24:33.890476   42658 command_runner.go:130] > GitCommitDate:  unknown
	I0910 18:24:33.890483   42658 command_runner.go:130] > GitTreeState:   clean
	I0910 18:24:33.890492   42658 command_runner.go:130] > BuildDate:      2024-09-10T02:34:15Z
	I0910 18:24:33.890499   42658 command_runner.go:130] > GoVersion:      go1.21.6
	I0910 18:24:33.890505   42658 command_runner.go:130] > Compiler:       gc
	I0910 18:24:33.890510   42658 command_runner.go:130] > Platform:       linux/amd64
	I0910 18:24:33.890513   42658 command_runner.go:130] > Linkmode:       dynamic
	I0910 18:24:33.890517   42658 command_runner.go:130] > BuildTags:      
	I0910 18:24:33.890522   42658 command_runner.go:130] >   containers_image_ostree_stub
	I0910 18:24:33.890526   42658 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0910 18:24:33.890529   42658 command_runner.go:130] >   btrfs_noversion
	I0910 18:24:33.890534   42658 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0910 18:24:33.890538   42658 command_runner.go:130] >   libdm_no_deferred_remove
	I0910 18:24:33.890542   42658 command_runner.go:130] >   seccomp
	I0910 18:24:33.890545   42658 command_runner.go:130] > LDFlags:          unknown
	I0910 18:24:33.890556   42658 command_runner.go:130] > SeccompEnabled:   true
	I0910 18:24:33.890564   42658 command_runner.go:130] > AppArmorEnabled:  false
	I0910 18:24:33.894093   42658 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 18:24:33.895283   42658 main.go:141] libmachine: (multinode-925076) Calling .GetIP
	I0910 18:24:33.897859   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:33.898227   42658 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:24:33.898249   42658 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:24:33.898500   42658 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 18:24:33.902837   42658 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0910 18:24:33.902930   42658 kubeadm.go:883] updating cluster {Name:multinode-925076 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-925076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.164 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:24:33.903078   42658 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:24:33.903134   42658 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:24:33.948482   42658 command_runner.go:130] > {
	I0910 18:24:33.948503   42658 command_runner.go:130] >   "images": [
	I0910 18:24:33.948508   42658 command_runner.go:130] >     {
	I0910 18:24:33.948519   42658 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0910 18:24:33.948525   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.948532   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0910 18:24:33.948537   42658 command_runner.go:130] >       ],
	I0910 18:24:33.948543   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.948555   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0910 18:24:33.948569   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0910 18:24:33.948578   42658 command_runner.go:130] >       ],
	I0910 18:24:33.948586   42658 command_runner.go:130] >       "size": "87165492",
	I0910 18:24:33.948596   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.948605   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.948615   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.948627   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.948635   42658 command_runner.go:130] >     },
	I0910 18:24:33.948642   42658 command_runner.go:130] >     {
	I0910 18:24:33.948657   42658 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0910 18:24:33.948667   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.948679   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0910 18:24:33.948687   42658 command_runner.go:130] >       ],
	I0910 18:24:33.948695   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.948711   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0910 18:24:33.948725   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0910 18:24:33.948734   42658 command_runner.go:130] >       ],
	I0910 18:24:33.948742   42658 command_runner.go:130] >       "size": "87190579",
	I0910 18:24:33.948751   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.948765   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.948774   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.948782   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.948790   42658 command_runner.go:130] >     },
	I0910 18:24:33.948797   42658 command_runner.go:130] >     {
	I0910 18:24:33.948811   42658 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0910 18:24:33.948828   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.948840   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0910 18:24:33.948849   42658 command_runner.go:130] >       ],
	I0910 18:24:33.948857   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.948879   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0910 18:24:33.948895   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0910 18:24:33.948904   42658 command_runner.go:130] >       ],
	I0910 18:24:33.948913   42658 command_runner.go:130] >       "size": "1363676",
	I0910 18:24:33.948923   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.948933   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.948942   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.948952   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.948960   42658 command_runner.go:130] >     },
	I0910 18:24:33.948965   42658 command_runner.go:130] >     {
	I0910 18:24:33.948976   42658 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0910 18:24:33.948985   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.948994   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0910 18:24:33.949003   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949010   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.949026   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0910 18:24:33.949049   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0910 18:24:33.949058   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949066   42658 command_runner.go:130] >       "size": "31470524",
	I0910 18:24:33.949087   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.949097   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.949106   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.949114   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.949122   42658 command_runner.go:130] >     },
	I0910 18:24:33.949128   42658 command_runner.go:130] >     {
	I0910 18:24:33.949142   42658 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0910 18:24:33.949151   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.949160   42658 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0910 18:24:33.949169   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949177   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.949192   42658 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0910 18:24:33.949207   42658 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0910 18:24:33.949222   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949233   42658 command_runner.go:130] >       "size": "61245718",
	I0910 18:24:33.949242   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.949250   42658 command_runner.go:130] >       "username": "nonroot",
	I0910 18:24:33.949269   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.949279   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.949287   42658 command_runner.go:130] >     },
	I0910 18:24:33.949295   42658 command_runner.go:130] >     {
	I0910 18:24:33.949306   42658 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0910 18:24:33.949316   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.949324   42658 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0910 18:24:33.949333   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949345   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.949360   42658 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0910 18:24:33.949378   42658 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0910 18:24:33.949386   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949394   42658 command_runner.go:130] >       "size": "149009664",
	I0910 18:24:33.949404   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.949410   42658 command_runner.go:130] >         "value": "0"
	I0910 18:24:33.949414   42658 command_runner.go:130] >       },
	I0910 18:24:33.949420   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.949425   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.949432   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.949438   42658 command_runner.go:130] >     },
	I0910 18:24:33.949444   42658 command_runner.go:130] >     {
	I0910 18:24:33.949453   42658 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0910 18:24:33.949460   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.949465   42658 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0910 18:24:33.949472   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949476   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.949483   42658 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0910 18:24:33.949491   42658 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0910 18:24:33.949494   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949503   42658 command_runner.go:130] >       "size": "95233506",
	I0910 18:24:33.949508   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.949515   42658 command_runner.go:130] >         "value": "0"
	I0910 18:24:33.949527   42658 command_runner.go:130] >       },
	I0910 18:24:33.949539   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.949545   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.949552   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.949557   42658 command_runner.go:130] >     },
	I0910 18:24:33.949562   42658 command_runner.go:130] >     {
	I0910 18:24:33.949573   42658 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0910 18:24:33.949582   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.949590   42658 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0910 18:24:33.949599   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949606   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.949641   42658 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0910 18:24:33.949657   42658 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0910 18:24:33.949663   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949671   42658 command_runner.go:130] >       "size": "89437512",
	I0910 18:24:33.949677   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.949686   42658 command_runner.go:130] >         "value": "0"
	I0910 18:24:33.949692   42658 command_runner.go:130] >       },
	I0910 18:24:33.949699   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.949705   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.949711   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.949715   42658 command_runner.go:130] >     },
	I0910 18:24:33.949721   42658 command_runner.go:130] >     {
	I0910 18:24:33.949730   42658 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0910 18:24:33.949736   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.949744   42658 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0910 18:24:33.949750   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949757   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.949772   42658 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0910 18:24:33.949780   42658 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0910 18:24:33.949783   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949787   42658 command_runner.go:130] >       "size": "92728217",
	I0910 18:24:33.949791   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.949794   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.949798   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.949801   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.949809   42658 command_runner.go:130] >     },
	I0910 18:24:33.949812   42658 command_runner.go:130] >     {
	I0910 18:24:33.949817   42658 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0910 18:24:33.949821   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.949826   42658 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0910 18:24:33.949829   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949833   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.949840   42658 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0910 18:24:33.949848   42658 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0910 18:24:33.949851   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949855   42658 command_runner.go:130] >       "size": "68420936",
	I0910 18:24:33.949859   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.949863   42658 command_runner.go:130] >         "value": "0"
	I0910 18:24:33.949866   42658 command_runner.go:130] >       },
	I0910 18:24:33.949870   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.949879   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.949885   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.949889   42658 command_runner.go:130] >     },
	I0910 18:24:33.949892   42658 command_runner.go:130] >     {
	I0910 18:24:33.949898   42658 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0910 18:24:33.949904   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.949908   42658 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0910 18:24:33.949914   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949918   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.949925   42658 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0910 18:24:33.949932   42658 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0910 18:24:33.949937   42658 command_runner.go:130] >       ],
	I0910 18:24:33.949941   42658 command_runner.go:130] >       "size": "742080",
	I0910 18:24:33.949945   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.949951   42658 command_runner.go:130] >         "value": "65535"
	I0910 18:24:33.949954   42658 command_runner.go:130] >       },
	I0910 18:24:33.949958   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.949962   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.949966   42658 command_runner.go:130] >       "pinned": true
	I0910 18:24:33.949969   42658 command_runner.go:130] >     }
	I0910 18:24:33.949972   42658 command_runner.go:130] >   ]
	I0910 18:24:33.949980   42658 command_runner.go:130] > }
	I0910 18:24:33.950168   42658 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:24:33.950179   42658 crio.go:433] Images already preloaded, skipping extraction
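	The JSON dump above is what the preload check parses to decide that every required image is already present on the node. To eyeball just the repo tags without the log prefixes, one option is to re-run the same crictl command and filter it with jq (a sketch; assumes jq is installed on the machine invoking minikube):

	  # list only the repo tags reported by CRI-O on the node
	  $ minikube ssh -p multinode-925076 -- sudo crictl images --output json | jq -r '.images[].repoTags[]'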
	I0910 18:24:33.950226   42658 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:24:33.988606   42658 command_runner.go:130] > {
	I0910 18:24:33.988631   42658 command_runner.go:130] >   "images": [
	I0910 18:24:33.988637   42658 command_runner.go:130] >     {
	I0910 18:24:33.988646   42658 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0910 18:24:33.988651   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.988660   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0910 18:24:33.988666   42658 command_runner.go:130] >       ],
	I0910 18:24:33.988672   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.988699   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0910 18:24:33.988711   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0910 18:24:33.988717   42658 command_runner.go:130] >       ],
	I0910 18:24:33.988723   42658 command_runner.go:130] >       "size": "87165492",
	I0910 18:24:33.988729   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.988739   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.988750   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.988760   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.988766   42658 command_runner.go:130] >     },
	I0910 18:24:33.988770   42658 command_runner.go:130] >     {
	I0910 18:24:33.988780   42658 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0910 18:24:33.988787   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.988795   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0910 18:24:33.988801   42658 command_runner.go:130] >       ],
	I0910 18:24:33.988808   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.988815   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0910 18:24:33.988823   42658 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0910 18:24:33.988827   42658 command_runner.go:130] >       ],
	I0910 18:24:33.988831   42658 command_runner.go:130] >       "size": "87190579",
	I0910 18:24:33.988835   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.988845   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.988851   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.988855   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.988863   42658 command_runner.go:130] >     },
	I0910 18:24:33.988872   42658 command_runner.go:130] >     {
	I0910 18:24:33.988878   42658 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0910 18:24:33.988884   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.988889   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0910 18:24:33.988892   42658 command_runner.go:130] >       ],
	I0910 18:24:33.988896   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.988903   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0910 18:24:33.988911   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0910 18:24:33.988914   42658 command_runner.go:130] >       ],
	I0910 18:24:33.988918   42658 command_runner.go:130] >       "size": "1363676",
	I0910 18:24:33.988922   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.988926   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.988932   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.988937   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.988940   42658 command_runner.go:130] >     },
	I0910 18:24:33.988943   42658 command_runner.go:130] >     {
	I0910 18:24:33.988949   42658 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0910 18:24:33.988955   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.988960   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0910 18:24:33.988966   42658 command_runner.go:130] >       ],
	I0910 18:24:33.988970   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.988977   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0910 18:24:33.988990   42658 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0910 18:24:33.988997   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989001   42658 command_runner.go:130] >       "size": "31470524",
	I0910 18:24:33.989004   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.989008   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.989012   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.989016   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.989019   42658 command_runner.go:130] >     },
	I0910 18:24:33.989023   42658 command_runner.go:130] >     {
	I0910 18:24:33.989029   42658 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0910 18:24:33.989035   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.989040   42658 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0910 18:24:33.989043   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989057   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.989066   42658 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0910 18:24:33.989094   42658 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0910 18:24:33.989103   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989107   42658 command_runner.go:130] >       "size": "61245718",
	I0910 18:24:33.989111   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.989115   42658 command_runner.go:130] >       "username": "nonroot",
	I0910 18:24:33.989119   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.989122   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.989126   42658 command_runner.go:130] >     },
	I0910 18:24:33.989129   42658 command_runner.go:130] >     {
	I0910 18:24:33.989135   42658 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0910 18:24:33.989144   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.989149   42658 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0910 18:24:33.989154   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989158   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.989167   42658 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0910 18:24:33.989173   42658 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0910 18:24:33.989179   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989182   42658 command_runner.go:130] >       "size": "149009664",
	I0910 18:24:33.989186   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.989193   42658 command_runner.go:130] >         "value": "0"
	I0910 18:24:33.989199   42658 command_runner.go:130] >       },
	I0910 18:24:33.989203   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.989207   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.989211   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.989214   42658 command_runner.go:130] >     },
	I0910 18:24:33.989217   42658 command_runner.go:130] >     {
	I0910 18:24:33.989223   42658 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0910 18:24:33.989229   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.989233   42658 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0910 18:24:33.989237   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989241   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.989247   42658 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0910 18:24:33.989256   42658 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0910 18:24:33.989259   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989270   42658 command_runner.go:130] >       "size": "95233506",
	I0910 18:24:33.989276   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.989279   42658 command_runner.go:130] >         "value": "0"
	I0910 18:24:33.989283   42658 command_runner.go:130] >       },
	I0910 18:24:33.989286   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.989290   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.989294   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.989297   42658 command_runner.go:130] >     },
	I0910 18:24:33.989303   42658 command_runner.go:130] >     {
	I0910 18:24:33.989308   42658 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0910 18:24:33.989314   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.989319   42658 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0910 18:24:33.989323   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989326   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.989348   42658 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0910 18:24:33.989358   42658 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0910 18:24:33.989362   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989366   42658 command_runner.go:130] >       "size": "89437512",
	I0910 18:24:33.989370   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.989374   42658 command_runner.go:130] >         "value": "0"
	I0910 18:24:33.989377   42658 command_runner.go:130] >       },
	I0910 18:24:33.989381   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.989385   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.989389   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.989392   42658 command_runner.go:130] >     },
	I0910 18:24:33.989395   42658 command_runner.go:130] >     {
	I0910 18:24:33.989401   42658 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0910 18:24:33.989407   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.989412   42658 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0910 18:24:33.989415   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989419   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.989425   42658 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0910 18:24:33.989437   42658 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0910 18:24:33.989440   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989445   42658 command_runner.go:130] >       "size": "92728217",
	I0910 18:24:33.989450   42658 command_runner.go:130] >       "uid": null,
	I0910 18:24:33.989459   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.989465   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.989469   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.989473   42658 command_runner.go:130] >     },
	I0910 18:24:33.989476   42658 command_runner.go:130] >     {
	I0910 18:24:33.989482   42658 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0910 18:24:33.989486   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.989491   42658 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0910 18:24:33.989494   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989497   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.989504   42658 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0910 18:24:33.989513   42658 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0910 18:24:33.989519   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989524   42658 command_runner.go:130] >       "size": "68420936",
	I0910 18:24:33.989528   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.989532   42658 command_runner.go:130] >         "value": "0"
	I0910 18:24:33.989535   42658 command_runner.go:130] >       },
	I0910 18:24:33.989540   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.989545   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.989549   42658 command_runner.go:130] >       "pinned": false
	I0910 18:24:33.989553   42658 command_runner.go:130] >     },
	I0910 18:24:33.989556   42658 command_runner.go:130] >     {
	I0910 18:24:33.989562   42658 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0910 18:24:33.989568   42658 command_runner.go:130] >       "repoTags": [
	I0910 18:24:33.989573   42658 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0910 18:24:33.989576   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989579   42658 command_runner.go:130] >       "repoDigests": [
	I0910 18:24:33.989586   42658 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0910 18:24:33.989593   42658 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0910 18:24:33.989603   42658 command_runner.go:130] >       ],
	I0910 18:24:33.989607   42658 command_runner.go:130] >       "size": "742080",
	I0910 18:24:33.989611   42658 command_runner.go:130] >       "uid": {
	I0910 18:24:33.989615   42658 command_runner.go:130] >         "value": "65535"
	I0910 18:24:33.989618   42658 command_runner.go:130] >       },
	I0910 18:24:33.989622   42658 command_runner.go:130] >       "username": "",
	I0910 18:24:33.989628   42658 command_runner.go:130] >       "spec": null,
	I0910 18:24:33.989636   42658 command_runner.go:130] >       "pinned": true
	I0910 18:24:33.989641   42658 command_runner.go:130] >     }
	I0910 18:24:33.989645   42658 command_runner.go:130] >   ]
	I0910 18:24:33.989648   42658 command_runner.go:130] > }
	I0910 18:24:33.989759   42658 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:24:33.989769   42658 cache_images.go:84] Images are preloaded, skipping loading
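The JSON listing above is the image inventory CRI-O reports before minikube decides to skip loading. As a hedged illustration only (field names mirror the output above; the helper is hypothetical and is not minikube's implementation), a small Go sketch that decodes such a listing and checks that the expected v1.31.0 control-plane images are present:

	// Hypothetical helper: decodes an image listing shaped like the JSON above
	// (an "images" array with "id", "repoTags", "repoDigests", "pinned" fields)
	// and reports whether each expected tag is present.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// The listing would normally come from the CRI ListImages call
		// (or `crictl images -o json`); here it is read from stdin.
		var list imageList
		if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}

		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/kube-proxy:v1.31.0",
			"registry.k8s.io/pause:3.10",
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, tag := range want {
			fmt.Printf("%-45s present=%v\n", tag, have[tag])
		}
	}
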
	I0910 18:24:33.989775   42658 kubeadm.go:934] updating node { 192.168.39.248 8443 v1.31.0 crio true true} ...
	I0910 18:24:33.989898   42658 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-925076 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-925076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
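The kubelet unit drop-in logged above is rendered from per-node values (binary version, hostname override, node IP). A minimal sketch, assuming a hand-written text/template rather than minikube's real kubeadm template, that produces an override of the same shape:

	// Illustrative sketch only: renders a kubelet systemd override similar in
	// shape to the one logged above. The template text and parameter names are
	// assumptions for this example.
	package main

	import (
		"os"
		"text/template"
	)

	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		params := struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.31.0", "multinode-925076", "192.168.39.248"}

		tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
		if err := tmpl.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}
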
	I0910 18:24:33.989982   42658 ssh_runner.go:195] Run: crio config
	I0910 18:24:34.023136   42658 command_runner.go:130] ! time="2024-09-10 18:24:33.995732341Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0910 18:24:34.029958   42658 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0910 18:24:34.034778   42658 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0910 18:24:34.034802   42658 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0910 18:24:34.034813   42658 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0910 18:24:34.034819   42658 command_runner.go:130] > #
	I0910 18:24:34.034830   42658 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0910 18:24:34.034840   42658 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0910 18:24:34.034850   42658 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0910 18:24:34.034864   42658 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0910 18:24:34.034870   42658 command_runner.go:130] > # reload'.
	I0910 18:24:34.034880   42658 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0910 18:24:34.034892   42658 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0910 18:24:34.034901   42658 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0910 18:24:34.034910   42658 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0910 18:24:34.034919   42658 command_runner.go:130] > [crio]
	I0910 18:24:34.034928   42658 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0910 18:24:34.034938   42658 command_runner.go:130] > # container images, in this directory.
	I0910 18:24:34.034958   42658 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0910 18:24:34.034976   42658 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0910 18:24:34.034985   42658 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0910 18:24:34.034998   42658 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0910 18:24:34.035007   42658 command_runner.go:130] > # imagestore = ""
	I0910 18:24:34.035017   42658 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0910 18:24:34.035028   42658 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0910 18:24:34.035038   42658 command_runner.go:130] > storage_driver = "overlay"
	I0910 18:24:34.035049   42658 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0910 18:24:34.035060   42658 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0910 18:24:34.035069   42658 command_runner.go:130] > storage_option = [
	I0910 18:24:34.035076   42658 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0910 18:24:34.035079   42658 command_runner.go:130] > ]
	I0910 18:24:34.035087   42658 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0910 18:24:34.035093   42658 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0910 18:24:34.035104   42658 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0910 18:24:34.035121   42658 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0910 18:24:34.035134   42658 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0910 18:24:34.035140   42658 command_runner.go:130] > # always happen on a node reboot
	I0910 18:24:34.035145   42658 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0910 18:24:34.035159   42658 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0910 18:24:34.035166   42658 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0910 18:24:34.035173   42658 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0910 18:24:34.035178   42658 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0910 18:24:34.035185   42658 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0910 18:24:34.035196   42658 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0910 18:24:34.035200   42658 command_runner.go:130] > # internal_wipe = true
	I0910 18:24:34.035207   42658 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0910 18:24:34.035214   42658 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0910 18:24:34.035218   42658 command_runner.go:130] > # internal_repair = false
	I0910 18:24:34.035225   42658 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0910 18:24:34.035231   42658 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0910 18:24:34.035236   42658 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0910 18:24:34.035241   42658 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0910 18:24:34.035247   42658 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0910 18:24:34.035253   42658 command_runner.go:130] > [crio.api]
	I0910 18:24:34.035265   42658 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0910 18:24:34.035272   42658 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0910 18:24:34.035277   42658 command_runner.go:130] > # IP address on which the stream server will listen.
	I0910 18:24:34.035284   42658 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0910 18:24:34.035290   42658 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0910 18:24:34.035297   42658 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0910 18:24:34.035306   42658 command_runner.go:130] > # stream_port = "0"
	I0910 18:24:34.035313   42658 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0910 18:24:34.035317   42658 command_runner.go:130] > # stream_enable_tls = false
	I0910 18:24:34.035322   42658 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0910 18:24:34.035329   42658 command_runner.go:130] > # stream_idle_timeout = ""
	I0910 18:24:34.035340   42658 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0910 18:24:34.035348   42658 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0910 18:24:34.035352   42658 command_runner.go:130] > # minutes.
	I0910 18:24:34.035356   42658 command_runner.go:130] > # stream_tls_cert = ""
	I0910 18:24:34.035362   42658 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0910 18:24:34.035370   42658 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0910 18:24:34.035374   42658 command_runner.go:130] > # stream_tls_key = ""
	I0910 18:24:34.035381   42658 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0910 18:24:34.035387   42658 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0910 18:24:34.035407   42658 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0910 18:24:34.035413   42658 command_runner.go:130] > # stream_tls_ca = ""
	I0910 18:24:34.035420   42658 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0910 18:24:34.035427   42658 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0910 18:24:34.035434   42658 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0910 18:24:34.035440   42658 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
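Both grpc_max_send_msg_size and grpc_max_recv_msg_size are raised to 16777216 bytes here; a client speaking CRI to the crio socket needs matching call options or large responses (such as the image listing earlier in this log) may be rejected. A hedged Go sketch of dialing the socket with those limits (the socket path and option values are assumptions, not taken from this log):

	// Minimal sketch: dial CRI-O's CRI socket with gRPC call-size limits that
	// mirror the 16777216-byte values in the config dump above. The RPC stubs
	// themselves are not shown.
	package main

	import (
		"log"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
	)

	func main() {
		conn, err := grpc.Dial(
			"unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()),
			grpc.WithDefaultCallOptions(
				grpc.MaxCallRecvMsgSize(16*1024*1024), // matches grpc_max_recv_msg_size
				grpc.MaxCallSendMsgSize(16*1024*1024), // matches grpc_max_send_msg_size
			),
		)
		if err != nil {
			log.Fatalf("dial crio socket: %v", err)
		}
		defer conn.Close()
		log.Println("connected:", conn.Target())
	}
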
	I0910 18:24:34.035446   42658 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0910 18:24:34.035453   42658 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0910 18:24:34.035457   42658 command_runner.go:130] > [crio.runtime]
	I0910 18:24:34.035465   42658 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0910 18:24:34.035470   42658 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0910 18:24:34.035476   42658 command_runner.go:130] > # "nofile=1024:2048"
	I0910 18:24:34.035481   42658 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0910 18:24:34.035486   42658 command_runner.go:130] > # default_ulimits = [
	I0910 18:24:34.035489   42658 command_runner.go:130] > # ]
	I0910 18:24:34.035494   42658 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0910 18:24:34.035504   42658 command_runner.go:130] > # no_pivot = false
	I0910 18:24:34.035510   42658 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0910 18:24:34.035516   42658 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0910 18:24:34.035522   42658 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0910 18:24:34.035528   42658 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0910 18:24:34.035535   42658 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0910 18:24:34.035541   42658 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0910 18:24:34.035547   42658 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0910 18:24:34.035553   42658 command_runner.go:130] > # Cgroup setting for conmon
	I0910 18:24:34.035561   42658 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0910 18:24:34.035565   42658 command_runner.go:130] > conmon_cgroup = "pod"
	I0910 18:24:34.035572   42658 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0910 18:24:34.035577   42658 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0910 18:24:34.035587   42658 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0910 18:24:34.035591   42658 command_runner.go:130] > conmon_env = [
	I0910 18:24:34.035599   42658 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0910 18:24:34.035601   42658 command_runner.go:130] > ]
	I0910 18:24:34.035608   42658 command_runner.go:130] > # Additional environment variables to set for all the
	I0910 18:24:34.035615   42658 command_runner.go:130] > # containers. These are overridden if set in the
	I0910 18:24:34.035621   42658 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0910 18:24:34.035625   42658 command_runner.go:130] > # default_env = [
	I0910 18:24:34.035628   42658 command_runner.go:130] > # ]
	I0910 18:24:34.035633   42658 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0910 18:24:34.035639   42658 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0910 18:24:34.035643   42658 command_runner.go:130] > # selinux = false
	I0910 18:24:34.035648   42658 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0910 18:24:34.035653   42658 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0910 18:24:34.035658   42658 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0910 18:24:34.035662   42658 command_runner.go:130] > # seccomp_profile = ""
	I0910 18:24:34.035666   42658 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0910 18:24:34.035671   42658 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0910 18:24:34.035677   42658 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0910 18:24:34.035681   42658 command_runner.go:130] > # which might increase security.
	I0910 18:24:34.035684   42658 command_runner.go:130] > # This option is currently deprecated,
	I0910 18:24:34.035690   42658 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0910 18:24:34.035694   42658 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0910 18:24:34.035704   42658 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0910 18:24:34.035712   42658 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0910 18:24:34.035718   42658 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0910 18:24:34.035725   42658 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0910 18:24:34.035729   42658 command_runner.go:130] > # This option supports live configuration reload.
	I0910 18:24:34.035736   42658 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0910 18:24:34.035741   42658 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0910 18:24:34.035748   42658 command_runner.go:130] > # the cgroup blockio controller.
	I0910 18:24:34.035752   42658 command_runner.go:130] > # blockio_config_file = ""
	I0910 18:24:34.035761   42658 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0910 18:24:34.035765   42658 command_runner.go:130] > # blockio parameters.
	I0910 18:24:34.035770   42658 command_runner.go:130] > # blockio_reload = false
	I0910 18:24:34.035777   42658 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0910 18:24:34.035782   42658 command_runner.go:130] > # irqbalance daemon.
	I0910 18:24:34.035787   42658 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0910 18:24:34.035796   42658 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0910 18:24:34.035802   42658 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0910 18:24:34.035809   42658 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0910 18:24:34.035814   42658 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0910 18:24:34.035821   42658 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0910 18:24:34.035826   42658 command_runner.go:130] > # This option supports live configuration reload.
	I0910 18:24:34.035833   42658 command_runner.go:130] > # rdt_config_file = ""
	I0910 18:24:34.035838   42658 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0910 18:24:34.035844   42658 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0910 18:24:34.035878   42658 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0910 18:24:34.035889   42658 command_runner.go:130] > # separate_pull_cgroup = ""
	I0910 18:24:34.035898   42658 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0910 18:24:34.035906   42658 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0910 18:24:34.035915   42658 command_runner.go:130] > # will be added.
	I0910 18:24:34.035922   42658 command_runner.go:130] > # default_capabilities = [
	I0910 18:24:34.035930   42658 command_runner.go:130] > # 	"CHOWN",
	I0910 18:24:34.035936   42658 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0910 18:24:34.035943   42658 command_runner.go:130] > # 	"FSETID",
	I0910 18:24:34.035949   42658 command_runner.go:130] > # 	"FOWNER",
	I0910 18:24:34.035957   42658 command_runner.go:130] > # 	"SETGID",
	I0910 18:24:34.035963   42658 command_runner.go:130] > # 	"SETUID",
	I0910 18:24:34.035975   42658 command_runner.go:130] > # 	"SETPCAP",
	I0910 18:24:34.035984   42658 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0910 18:24:34.035987   42658 command_runner.go:130] > # 	"KILL",
	I0910 18:24:34.035990   42658 command_runner.go:130] > # ]
	I0910 18:24:34.035997   42658 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0910 18:24:34.036006   42658 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0910 18:24:34.036010   42658 command_runner.go:130] > # add_inheritable_capabilities = false
	I0910 18:24:34.036019   42658 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0910 18:24:34.036024   42658 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0910 18:24:34.036030   42658 command_runner.go:130] > default_sysctls = [
	I0910 18:24:34.036035   42658 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0910 18:24:34.036040   42658 command_runner.go:130] > ]
	I0910 18:24:34.036044   42658 command_runner.go:130] > # List of devices on the host that a
	I0910 18:24:34.036052   42658 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0910 18:24:34.036056   42658 command_runner.go:130] > # allowed_devices = [
	I0910 18:24:34.036061   42658 command_runner.go:130] > # 	"/dev/fuse",
	I0910 18:24:34.036064   42658 command_runner.go:130] > # ]
	I0910 18:24:34.036071   42658 command_runner.go:130] > # List of additional devices, specified as
	I0910 18:24:34.036077   42658 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0910 18:24:34.036084   42658 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0910 18:24:34.036092   42658 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0910 18:24:34.036098   42658 command_runner.go:130] > # additional_devices = [
	I0910 18:24:34.036101   42658 command_runner.go:130] > # ]
	I0910 18:24:34.036106   42658 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0910 18:24:34.036112   42658 command_runner.go:130] > # cdi_spec_dirs = [
	I0910 18:24:34.036115   42658 command_runner.go:130] > # 	"/etc/cdi",
	I0910 18:24:34.036119   42658 command_runner.go:130] > # 	"/var/run/cdi",
	I0910 18:24:34.036122   42658 command_runner.go:130] > # ]
	I0910 18:24:34.036128   42658 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0910 18:24:34.036136   42658 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0910 18:24:34.036141   42658 command_runner.go:130] > # Defaults to false.
	I0910 18:24:34.036148   42658 command_runner.go:130] > # device_ownership_from_security_context = false
	I0910 18:24:34.036154   42658 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0910 18:24:34.036162   42658 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0910 18:24:34.036165   42658 command_runner.go:130] > # hooks_dir = [
	I0910 18:24:34.036169   42658 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0910 18:24:34.036181   42658 command_runner.go:130] > # ]
	I0910 18:24:34.036189   42658 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0910 18:24:34.036195   42658 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0910 18:24:34.036201   42658 command_runner.go:130] > # its default mounts from the following two files:
	I0910 18:24:34.036205   42658 command_runner.go:130] > #
	I0910 18:24:34.036211   42658 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0910 18:24:34.036218   42658 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0910 18:24:34.036223   42658 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0910 18:24:34.036228   42658 command_runner.go:130] > #
	I0910 18:24:34.036234   42658 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0910 18:24:34.036242   42658 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0910 18:24:34.036248   42658 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0910 18:24:34.036255   42658 command_runner.go:130] > #      only add mounts it finds in this file.
	I0910 18:24:34.036258   42658 command_runner.go:130] > #
	I0910 18:24:34.036262   42658 command_runner.go:130] > # default_mounts_file = ""
	I0910 18:24:34.036266   42658 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0910 18:24:34.036273   42658 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0910 18:24:34.036279   42658 command_runner.go:130] > pids_limit = 1024
	I0910 18:24:34.036284   42658 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0910 18:24:34.036292   42658 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0910 18:24:34.036298   42658 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0910 18:24:34.036311   42658 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0910 18:24:34.036315   42658 command_runner.go:130] > # log_size_max = -1
	I0910 18:24:34.036323   42658 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0910 18:24:34.036331   42658 command_runner.go:130] > # log_to_journald = false
	I0910 18:24:34.036337   42658 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0910 18:24:34.036343   42658 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0910 18:24:34.036349   42658 command_runner.go:130] > # Path to directory for container attach sockets.
	I0910 18:24:34.036354   42658 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0910 18:24:34.036359   42658 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0910 18:24:34.036365   42658 command_runner.go:130] > # bind_mount_prefix = ""
	I0910 18:24:34.036370   42658 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0910 18:24:34.036376   42658 command_runner.go:130] > # read_only = false
	I0910 18:24:34.036382   42658 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0910 18:24:34.036388   42658 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0910 18:24:34.036392   42658 command_runner.go:130] > # live configuration reload.
	I0910 18:24:34.036401   42658 command_runner.go:130] > # log_level = "info"
	I0910 18:24:34.036409   42658 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0910 18:24:34.036414   42658 command_runner.go:130] > # This option supports live configuration reload.
	I0910 18:24:34.036418   42658 command_runner.go:130] > # log_filter = ""
	I0910 18:24:34.036423   42658 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0910 18:24:34.036434   42658 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0910 18:24:34.036440   42658 command_runner.go:130] > # separated by comma.
	I0910 18:24:34.036447   42658 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0910 18:24:34.036454   42658 command_runner.go:130] > # uid_mappings = ""
	I0910 18:24:34.036459   42658 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0910 18:24:34.036467   42658 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0910 18:24:34.036471   42658 command_runner.go:130] > # separated by comma.
	I0910 18:24:34.036480   42658 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0910 18:24:34.036484   42658 command_runner.go:130] > # gid_mappings = ""
	I0910 18:24:34.036490   42658 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0910 18:24:34.036498   42658 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0910 18:24:34.036504   42658 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0910 18:24:34.036513   42658 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0910 18:24:34.036517   42658 command_runner.go:130] > # minimum_mappable_uid = -1
	I0910 18:24:34.036523   42658 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0910 18:24:34.036530   42658 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0910 18:24:34.036536   42658 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0910 18:24:34.036545   42658 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0910 18:24:34.036552   42658 command_runner.go:130] > # minimum_mappable_gid = -1
	I0910 18:24:34.036561   42658 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0910 18:24:34.036567   42658 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0910 18:24:34.036574   42658 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0910 18:24:34.036578   42658 command_runner.go:130] > # ctr_stop_timeout = 30
	I0910 18:24:34.036585   42658 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0910 18:24:34.036591   42658 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0910 18:24:34.036596   42658 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0910 18:24:34.036603   42658 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0910 18:24:34.036607   42658 command_runner.go:130] > drop_infra_ctr = false
	I0910 18:24:34.036612   42658 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0910 18:24:34.036620   42658 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0910 18:24:34.036627   42658 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0910 18:24:34.036637   42658 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0910 18:24:34.036645   42658 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0910 18:24:34.036653   42658 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0910 18:24:34.036658   42658 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0910 18:24:34.036665   42658 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0910 18:24:34.036668   42658 command_runner.go:130] > # shared_cpuset = ""
	I0910 18:24:34.036674   42658 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0910 18:24:34.036680   42658 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0910 18:24:34.036685   42658 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0910 18:24:34.036691   42658 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0910 18:24:34.036698   42658 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0910 18:24:34.036704   42658 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0910 18:24:34.036712   42658 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0910 18:24:34.036716   42658 command_runner.go:130] > # enable_criu_support = false
	I0910 18:24:34.036720   42658 command_runner.go:130] > # Enable/disable the generation of the container,
	I0910 18:24:34.036733   42658 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0910 18:24:34.036739   42658 command_runner.go:130] > # enable_pod_events = false
	I0910 18:24:34.036750   42658 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0910 18:24:34.036763   42658 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0910 18:24:34.036770   42658 command_runner.go:130] > # default_runtime = "runc"
	I0910 18:24:34.036775   42658 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0910 18:24:34.036784   42658 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0910 18:24:34.036794   42658 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0910 18:24:34.036803   42658 command_runner.go:130] > # creation as a file is not desired either.
	I0910 18:24:34.036813   42658 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0910 18:24:34.036820   42658 command_runner.go:130] > # the hostname is being managed dynamically.
	I0910 18:24:34.036824   42658 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0910 18:24:34.036830   42658 command_runner.go:130] > # ]
	I0910 18:24:34.036836   42658 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0910 18:24:34.036844   42658 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0910 18:24:34.036850   42658 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0910 18:24:34.036859   42658 command_runner.go:130] > # Each entry in the table should follow the format:
	I0910 18:24:34.036863   42658 command_runner.go:130] > #
	I0910 18:24:34.036870   42658 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0910 18:24:34.036880   42658 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0910 18:24:34.036940   42658 command_runner.go:130] > # runtime_type = "oci"
	I0910 18:24:34.036950   42658 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0910 18:24:34.036955   42658 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0910 18:24:34.036959   42658 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0910 18:24:34.036963   42658 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0910 18:24:34.036967   42658 command_runner.go:130] > # monitor_env = []
	I0910 18:24:34.036971   42658 command_runner.go:130] > # privileged_without_host_devices = false
	I0910 18:24:34.036978   42658 command_runner.go:130] > # allowed_annotations = []
	I0910 18:24:34.036983   42658 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0910 18:24:34.036989   42658 command_runner.go:130] > # Where:
	I0910 18:24:34.036994   42658 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0910 18:24:34.037002   42658 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0910 18:24:34.037008   42658 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0910 18:24:34.037016   42658 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0910 18:24:34.037020   42658 command_runner.go:130] > #   in $PATH.
	I0910 18:24:34.037026   42658 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0910 18:24:34.037033   42658 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0910 18:24:34.037039   42658 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0910 18:24:34.037044   42658 command_runner.go:130] > #   state.
	I0910 18:24:34.037050   42658 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0910 18:24:34.037057   42658 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0910 18:24:34.037063   42658 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0910 18:24:34.037068   42658 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0910 18:24:34.037086   42658 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0910 18:24:34.037100   42658 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0910 18:24:34.037112   42658 command_runner.go:130] > #   The currently recognized values are:
	I0910 18:24:34.037121   42658 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0910 18:24:34.037128   42658 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0910 18:24:34.037136   42658 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0910 18:24:34.037142   42658 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0910 18:24:34.037151   42658 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0910 18:24:34.037157   42658 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0910 18:24:34.037165   42658 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0910 18:24:34.037171   42658 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0910 18:24:34.037179   42658 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0910 18:24:34.037185   42658 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0910 18:24:34.037197   42658 command_runner.go:130] > #   deprecated option "conmon".
	I0910 18:24:34.037206   42658 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0910 18:24:34.037211   42658 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0910 18:24:34.037221   42658 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0910 18:24:34.037228   42658 command_runner.go:130] > #   should be moved to the container's cgroup
	I0910 18:24:34.037234   42658 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0910 18:24:34.037242   42658 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0910 18:24:34.037248   42658 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0910 18:24:34.037257   42658 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0910 18:24:34.037262   42658 command_runner.go:130] > #
	I0910 18:24:34.037269   42658 command_runner.go:130] > # Using the seccomp notifier feature:
	I0910 18:24:34.037276   42658 command_runner.go:130] > #
	I0910 18:24:34.037286   42658 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0910 18:24:34.037297   42658 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0910 18:24:34.037309   42658 command_runner.go:130] > #
	I0910 18:24:34.037318   42658 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0910 18:24:34.037330   42658 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0910 18:24:34.037337   42658 command_runner.go:130] > #
	I0910 18:24:34.037346   42658 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0910 18:24:34.037355   42658 command_runner.go:130] > # feature.
	I0910 18:24:34.037361   42658 command_runner.go:130] > #
	I0910 18:24:34.037372   42658 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0910 18:24:34.037382   42658 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0910 18:24:34.037392   42658 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0910 18:24:34.037406   42658 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0910 18:24:34.037418   42658 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0910 18:24:34.037425   42658 command_runner.go:130] > #
	I0910 18:24:34.037435   42658 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0910 18:24:34.037447   42658 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0910 18:24:34.037452   42658 command_runner.go:130] > #
	I0910 18:24:34.037463   42658 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0910 18:24:34.037474   42658 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0910 18:24:34.037481   42658 command_runner.go:130] > #
	I0910 18:24:34.037491   42658 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0910 18:24:34.037503   42658 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0910 18:24:34.037511   42658 command_runner.go:130] > # limitation.
	I0910 18:24:34.037526   42658 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0910 18:24:34.037536   42658 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0910 18:24:34.037542   42658 command_runner.go:130] > runtime_type = "oci"
	I0910 18:24:34.037549   42658 command_runner.go:130] > runtime_root = "/run/runc"
	I0910 18:24:34.037558   42658 command_runner.go:130] > runtime_config_path = ""
	I0910 18:24:34.037565   42658 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0910 18:24:34.037574   42658 command_runner.go:130] > monitor_cgroup = "pod"
	I0910 18:24:34.037581   42658 command_runner.go:130] > monitor_exec_cgroup = ""
	I0910 18:24:34.037590   42658 command_runner.go:130] > monitor_env = [
	I0910 18:24:34.037598   42658 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0910 18:24:34.037605   42658 command_runner.go:130] > ]
	I0910 18:24:34.037614   42658 command_runner.go:130] > privileged_without_host_devices = false
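The [crio.runtime.runtimes.runc] handler above (runtime_path, runtime_type, runtime_root, monitor settings) can be read back programmatically when debugging a node. A sketch, assuming the third-party github.com/BurntSushi/toml decoder and the conventional /etc/crio/crio.conf path; this is not how minikube or CRI-O themselves parse the file:

	// Sketch only: decode the runtime-handler table out of a CRI-O TOML config.
	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	type runtimeHandler struct {
		RuntimePath string `toml:"runtime_path"`
		RuntimeType string `toml:"runtime_type"`
		RuntimeRoot string `toml:"runtime_root"`
		MonitorPath string `toml:"monitor_path"`
	}

	type crioConfig struct {
		Crio struct {
			Runtime struct {
				Runtimes map[string]runtimeHandler `toml:"runtimes"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConfig
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			log.Fatalf("decode crio.conf: %v", err)
		}
		for name, h := range cfg.Crio.Runtime.Runtimes {
			fmt.Printf("handler %q: path=%s type=%s root=%s\n",
				name, h.RuntimePath, h.RuntimeType, h.RuntimeRoot)
		}
	}
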
	I0910 18:24:34.037627   42658 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0910 18:24:34.037637   42658 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0910 18:24:34.037652   42658 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0910 18:24:34.037666   42658 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0910 18:24:34.037679   42658 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0910 18:24:34.037691   42658 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0910 18:24:34.037708   42658 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0910 18:24:34.037722   42658 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0910 18:24:34.037730   42658 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0910 18:24:34.037741   42658 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0910 18:24:34.037746   42658 command_runner.go:130] > # Example:
	I0910 18:24:34.037753   42658 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0910 18:24:34.037760   42658 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0910 18:24:34.037770   42658 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0910 18:24:34.037778   42658 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0910 18:24:34.037782   42658 command_runner.go:130] > # cpuset = 0
	I0910 18:24:34.037788   42658 command_runner.go:130] > # cpushares = "0-1"
	I0910 18:24:34.037793   42658 command_runner.go:130] > # Where:
	I0910 18:24:34.037800   42658 command_runner.go:130] > # The workload name is workload-type.
	I0910 18:24:34.037810   42658 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0910 18:24:34.037819   42658 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0910 18:24:34.037827   42658 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0910 18:24:34.037839   42658 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0910 18:24:34.037847   42658 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
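Following the workload example in the comments above, a pod opts in via the key-only activation annotation and can override a resource per container. A tiny Go sketch that only assembles those annotation strings, mirroring the comment's own example; the container name and override value are hypothetical:

	// Sketch of the annotations the example [crio.runtime.workloads.workload-type]
	// stanza above would act on. Values are illustrative only.
	package main

	import "fmt"

	func main() {
		annotations := map[string]string{
			// Activation annotation: precise key match, value is ignored.
			"io.crio/workload": "",
			// Per-container override, as written in the comment's example.
			"io.crio.workload-type/my-container": `{"cpushares": "512"}`,
		}
		for k, v := range annotations {
			fmt.Printf("%s=%q\n", k, v)
		}
	}
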
	I0910 18:24:34.037860   42658 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0910 18:24:34.037870   42658 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0910 18:24:34.037877   42658 command_runner.go:130] > # Default value is set to true
	I0910 18:24:34.037883   42658 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0910 18:24:34.037890   42658 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0910 18:24:34.037897   42658 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0910 18:24:34.037904   42658 command_runner.go:130] > # Default value is set to 'false'
	I0910 18:24:34.037910   42658 command_runner.go:130] > # disable_hostport_mapping = false
	I0910 18:24:34.037919   42658 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0910 18:24:34.037927   42658 command_runner.go:130] > #
	I0910 18:24:34.037936   42658 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0910 18:24:34.037947   42658 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0910 18:24:34.037960   42658 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0910 18:24:34.037973   42658 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0910 18:24:34.037984   42658 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0910 18:24:34.037992   42658 command_runner.go:130] > [crio.image]
	I0910 18:24:34.038001   42658 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0910 18:24:34.038011   42658 command_runner.go:130] > # default_transport = "docker://"
	I0910 18:24:34.038020   42658 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0910 18:24:34.038032   42658 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0910 18:24:34.038042   42658 command_runner.go:130] > # global_auth_file = ""
	I0910 18:24:34.038050   42658 command_runner.go:130] > # The image used to instantiate infra containers.
	I0910 18:24:34.038060   42658 command_runner.go:130] > # This option supports live configuration reload.
	I0910 18:24:34.038067   42658 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0910 18:24:34.038080   42658 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0910 18:24:34.038091   42658 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0910 18:24:34.038098   42658 command_runner.go:130] > # This option supports live configuration reload.
	I0910 18:24:34.038111   42658 command_runner.go:130] > # pause_image_auth_file = ""
	I0910 18:24:34.038123   42658 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0910 18:24:34.038135   42658 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0910 18:24:34.038147   42658 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0910 18:24:34.038156   42658 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0910 18:24:34.038165   42658 command_runner.go:130] > # pause_command = "/pause"
	I0910 18:24:34.038176   42658 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0910 18:24:34.038188   42658 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0910 18:24:34.038197   42658 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0910 18:24:34.038217   42658 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0910 18:24:34.038229   42658 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0910 18:24:34.038240   42658 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0910 18:24:34.038248   42658 command_runner.go:130] > # pinned_images = [
	I0910 18:24:34.038252   42658 command_runner.go:130] > # ]
	I0910 18:24:34.038257   42658 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0910 18:24:34.038263   42658 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0910 18:24:34.038270   42658 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0910 18:24:34.038275   42658 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0910 18:24:34.038281   42658 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0910 18:24:34.038285   42658 command_runner.go:130] > # signature_policy = ""
	I0910 18:24:34.038292   42658 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0910 18:24:34.038303   42658 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0910 18:24:34.038311   42658 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0910 18:24:34.038317   42658 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0910 18:24:34.038324   42658 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0910 18:24:34.038329   42658 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0910 18:24:34.038337   42658 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0910 18:24:34.038343   42658 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0910 18:24:34.038349   42658 command_runner.go:130] > # changing them here.
	I0910 18:24:34.038353   42658 command_runner.go:130] > # insecure_registries = [
	I0910 18:24:34.038356   42658 command_runner.go:130] > # ]
	I0910 18:24:34.038362   42658 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0910 18:24:34.038369   42658 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0910 18:24:34.038373   42658 command_runner.go:130] > # image_volumes = "mkdir"
	I0910 18:24:34.038379   42658 command_runner.go:130] > # Temporary directory to use for storing big files
	I0910 18:24:34.038385   42658 command_runner.go:130] > # big_files_temporary_dir = ""
	I0910 18:24:34.038393   42658 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0910 18:24:34.038399   42658 command_runner.go:130] > # CNI plugins.
	I0910 18:24:34.038402   42658 command_runner.go:130] > [crio.network]
	I0910 18:24:34.038408   42658 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0910 18:24:34.038415   42658 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0910 18:24:34.038419   42658 command_runner.go:130] > # cni_default_network = ""
	I0910 18:24:34.038425   42658 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0910 18:24:34.038430   42658 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0910 18:24:34.038437   42658 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0910 18:24:34.038451   42658 command_runner.go:130] > # plugin_dirs = [
	I0910 18:24:34.038457   42658 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0910 18:24:34.038460   42658 command_runner.go:130] > # ]
	I0910 18:24:34.038466   42658 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0910 18:24:34.038471   42658 command_runner.go:130] > [crio.metrics]
	I0910 18:24:34.038475   42658 command_runner.go:130] > # Globally enable or disable metrics support.
	I0910 18:24:34.038479   42658 command_runner.go:130] > enable_metrics = true
	I0910 18:24:34.038486   42658 command_runner.go:130] > # Specify enabled metrics collectors.
	I0910 18:24:34.038491   42658 command_runner.go:130] > # Per default all metrics are enabled.
	I0910 18:24:34.038500   42658 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0910 18:24:34.038506   42658 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0910 18:24:34.038513   42658 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0910 18:24:34.038517   42658 command_runner.go:130] > # metrics_collectors = [
	I0910 18:24:34.038523   42658 command_runner.go:130] > # 	"operations",
	I0910 18:24:34.038528   42658 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0910 18:24:34.038532   42658 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0910 18:24:34.038536   42658 command_runner.go:130] > # 	"operations_errors",
	I0910 18:24:34.038540   42658 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0910 18:24:34.038546   42658 command_runner.go:130] > # 	"image_pulls_by_name",
	I0910 18:24:34.038551   42658 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0910 18:24:34.038558   42658 command_runner.go:130] > # 	"image_pulls_failures",
	I0910 18:24:34.038562   42658 command_runner.go:130] > # 	"image_pulls_successes",
	I0910 18:24:34.038568   42658 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0910 18:24:34.038572   42658 command_runner.go:130] > # 	"image_layer_reuse",
	I0910 18:24:34.038576   42658 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0910 18:24:34.038580   42658 command_runner.go:130] > # 	"containers_oom_total",
	I0910 18:24:34.038584   42658 command_runner.go:130] > # 	"containers_oom",
	I0910 18:24:34.038588   42658 command_runner.go:130] > # 	"processes_defunct",
	I0910 18:24:34.038592   42658 command_runner.go:130] > # 	"operations_total",
	I0910 18:24:34.038596   42658 command_runner.go:130] > # 	"operations_latency_seconds",
	I0910 18:24:34.038603   42658 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0910 18:24:34.038607   42658 command_runner.go:130] > # 	"operations_errors_total",
	I0910 18:24:34.038614   42658 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0910 18:24:34.038618   42658 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0910 18:24:34.038622   42658 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0910 18:24:34.038626   42658 command_runner.go:130] > # 	"image_pulls_success_total",
	I0910 18:24:34.038636   42658 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0910 18:24:34.038643   42658 command_runner.go:130] > # 	"containers_oom_count_total",
	I0910 18:24:34.038647   42658 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0910 18:24:34.038653   42658 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0910 18:24:34.038657   42658 command_runner.go:130] > # ]
	I0910 18:24:34.038662   42658 command_runner.go:130] > # The port on which the metrics server will listen.
	I0910 18:24:34.038666   42658 command_runner.go:130] > # metrics_port = 9090
	I0910 18:24:34.038670   42658 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0910 18:24:34.038675   42658 command_runner.go:130] > # metrics_socket = ""
	I0910 18:24:34.038681   42658 command_runner.go:130] > # The certificate for the secure metrics server.
	I0910 18:24:34.038687   42658 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0910 18:24:34.038695   42658 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0910 18:24:34.038699   42658 command_runner.go:130] > # certificate on any modification event.
	I0910 18:24:34.038708   42658 command_runner.go:130] > # metrics_cert = ""
	I0910 18:24:34.038713   42658 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0910 18:24:34.038725   42658 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0910 18:24:34.038731   42658 command_runner.go:130] > # metrics_key = ""
	I0910 18:24:34.038741   42658 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0910 18:24:34.038747   42658 command_runner.go:130] > [crio.tracing]
	I0910 18:24:34.038752   42658 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0910 18:24:34.038759   42658 command_runner.go:130] > # enable_tracing = false
	I0910 18:24:34.038763   42658 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0910 18:24:34.038767   42658 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0910 18:24:34.038776   42658 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0910 18:24:34.038780   42658 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0910 18:24:34.038784   42658 command_runner.go:130] > # CRI-O NRI configuration.
	I0910 18:24:34.038789   42658 command_runner.go:130] > [crio.nri]
	I0910 18:24:34.038794   42658 command_runner.go:130] > # Globally enable or disable NRI.
	I0910 18:24:34.038798   42658 command_runner.go:130] > # enable_nri = false
	I0910 18:24:34.038802   42658 command_runner.go:130] > # NRI socket to listen on.
	I0910 18:24:34.038808   42658 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0910 18:24:34.038813   42658 command_runner.go:130] > # NRI plugin directory to use.
	I0910 18:24:34.038820   42658 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0910 18:24:34.038825   42658 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0910 18:24:34.038835   42658 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0910 18:24:34.038842   42658 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0910 18:24:34.038851   42658 command_runner.go:130] > # nri_disable_connections = false
	I0910 18:24:34.038861   42658 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0910 18:24:34.038869   42658 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0910 18:24:34.038876   42658 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0910 18:24:34.038886   42658 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0910 18:24:34.038895   42658 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0910 18:24:34.038903   42658 command_runner.go:130] > [crio.stats]
	I0910 18:24:34.038915   42658 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0910 18:24:34.038925   42658 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0910 18:24:34.038934   42658 command_runner.go:130] > # stats_collection_period = 0
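The block above is CRI-O's generated configuration as echoed by minikube; nearly everything is a commented default, with enable_metrics = true the only value set explicitly in these sections. As a rough sketch (assuming the stock /etc/crio/crio.conf.d drop-in directory and a crio unit that accepts reload; the file name and values below are illustrative), overrides would normally go into a drop-in rather than the generated file:

    # Hypothetical drop-in: keep metrics on and collect stats every 10 s
    sudo mkdir -p /etc/crio/crio.conf.d
    printf '%s\n' '[crio.metrics]' 'enable_metrics = true' 'metrics_port = 9090' \
                  '[crio.stats]' 'stats_collection_period = 10' |
        sudo tee /etc/crio/crio.conf.d/10-metrics.conf

    # Options marked "supports live configuration reload" are re-read on SIGHUP
    sudo systemctl reload crio || sudo kill -HUP "$(pidof crio)"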
	I0910 18:24:34.039136   42658 cni.go:84] Creating CNI manager for ""
	I0910 18:24:34.039153   42658 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0910 18:24:34.039172   42658 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:24:34.039193   42658 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.248 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-925076 NodeName:multinode-925076 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:24:34.039343   42658 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-925076"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.248
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.248"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
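	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered into the /var/tmp/minikube/kubeadm.yaml.new file copied a few lines below. For orientation only, a hand-driven bootstrap with that same file would look roughly like this; this is a hypothetical invocation, since minikube drives kubeadm itself with its own flags and phases:

    # Illustrative manual equivalent of consuming the rendered config
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new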
	
	I0910 18:24:34.039402   42658 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:24:34.050273   42658 command_runner.go:130] > kubeadm
	I0910 18:24:34.050294   42658 command_runner.go:130] > kubectl
	I0910 18:24:34.050298   42658 command_runner.go:130] > kubelet
	I0910 18:24:34.050321   42658 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:24:34.050401   42658 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:24:34.060802   42658 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0910 18:24:34.077840   42658 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:24:34.094446   42658 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0910 18:24:34.110951   42658 ssh_runner.go:195] Run: grep 192.168.39.248	control-plane.minikube.internal$ /etc/hosts
	I0910 18:24:34.115291   42658 command_runner.go:130] > 192.168.39.248	control-plane.minikube.internal
	I0910 18:24:34.115371   42658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:24:34.253785   42658 ssh_runner.go:195] Run: sudo systemctl start kubelet
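Before starting the kubelet, minikube has just pushed the systemd unit plus the 10-kubeadm.conf drop-in and confirmed that control-plane.minikube.internal resolves through /etc/hosts. A minimal sketch of that host-entry check; the append branch is an assumption (the log only shows the grep succeeding), and the IP/hostname are taken from the lines above:

    IP=192.168.39.248
    HOST=control-plane.minikube.internal
    # Append the mapping only if it is not already present
    grep -qE "${IP}[[:space:]]+${HOST}" /etc/hosts || \
        printf '%s\t%s\n' "$IP" "$HOST" | sudo tee -a /etc/hosts
    sudo systemctl daemon-reload && sudo systemctl start kubelet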
	I0910 18:24:34.268947   42658 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076 for IP: 192.168.39.248
	I0910 18:24:34.268980   42658 certs.go:194] generating shared ca certs ...
	I0910 18:24:34.269000   42658 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:24:34.269203   42658 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:24:34.269246   42658 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:24:34.269256   42658 certs.go:256] generating profile certs ...
	I0910 18:24:34.269343   42658 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/client.key
	I0910 18:24:34.269392   42658 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/apiserver.key.b9c1a60e
	I0910 18:24:34.269440   42658 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/proxy-client.key
	I0910 18:24:34.269451   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 18:24:34.269462   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0910 18:24:34.269472   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 18:24:34.269490   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 18:24:34.269502   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 18:24:34.269513   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 18:24:34.269525   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 18:24:34.269536   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 18:24:34.269591   42658 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:24:34.269617   42658 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:24:34.269626   42658 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:24:34.269648   42658 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:24:34.269669   42658 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:24:34.269690   42658 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:24:34.269726   42658 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:24:34.269750   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> /usr/share/ca-certificates/131212.pem
	I0910 18:24:34.269762   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:24:34.269774   42658 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem -> /usr/share/ca-certificates/13121.pem
	I0910 18:24:34.271237   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:24:34.295596   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:24:34.318217   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:24:34.341332   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:24:34.364832   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0910 18:24:34.388027   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 18:24:34.411338   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:24:34.434423   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/multinode-925076/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:24:34.457823   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:24:34.480374   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:24:34.503236   42658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:24:34.525380   42658 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:24:34.541968   42658 ssh_runner.go:195] Run: openssl version
	I0910 18:24:34.548233   42658 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0910 18:24:34.548316   42658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:24:34.559413   42658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:24:34.563715   42658 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:24:34.563934   42658 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:24:34.563983   42658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:24:34.569348   42658 command_runner.go:130] > b5213941
	I0910 18:24:34.569403   42658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:24:34.578712   42658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:24:34.589392   42658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:24:34.593690   42658 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:24:34.593758   42658 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:24:34.593807   42658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:24:34.599246   42658 command_runner.go:130] > 51391683
	I0910 18:24:34.599303   42658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:24:34.608977   42658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:24:34.619679   42658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:24:34.623904   42658 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:24:34.623968   42658 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:24:34.624013   42658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:24:34.629451   42658 command_runner.go:130] > 3ec20f2e
	I0910 18:24:34.629515   42658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
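The openssl/ln pairs above follow the standard OpenSSL CA-path convention: each CA under /usr/share/ca-certificates gets an /etc/ssl/certs/<subject-hash>.0 symlink so TLS clients can locate it by hash. A generic sketch of one iteration of that pattern, using the minikubeCA file from the log (the variable names are placeholders):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"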
	I0910 18:24:34.638807   42658 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:24:34.643472   42658 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:24:34.643489   42658 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0910 18:24:34.643495   42658 command_runner.go:130] > Device: 253,1	Inode: 532758      Links: 1
	I0910 18:24:34.643503   42658 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0910 18:24:34.643519   42658 command_runner.go:130] > Access: 2024-09-10 18:17:54.803795567 +0000
	I0910 18:24:34.643530   42658 command_runner.go:130] > Modify: 2024-09-10 18:17:54.803795567 +0000
	I0910 18:24:34.643538   42658 command_runner.go:130] > Change: 2024-09-10 18:17:54.803795567 +0000
	I0910 18:24:34.643544   42658 command_runner.go:130] >  Birth: 2024-09-10 18:17:54.803795567 +0000
	I0910 18:24:34.643648   42658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:24:34.649056   42658 command_runner.go:130] > Certificate will not expire
	I0910 18:24:34.649123   42658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:24:34.654495   42658 command_runner.go:130] > Certificate will not expire
	I0910 18:24:34.654543   42658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:24:34.659805   42658 command_runner.go:130] > Certificate will not expire
	I0910 18:24:34.659850   42658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:24:34.665025   42658 command_runner.go:130] > Certificate will not expire
	I0910 18:24:34.665267   42658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:24:34.670595   42658 command_runner.go:130] > Certificate will not expire
	I0910 18:24:34.670646   42658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 18:24:34.676386   42658 command_runner.go:130] > Certificate will not expire
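Each -checkend 86400 call above asks openssl whether the certificate expires within the next 24 hours (86,400 seconds); exit status 0 produces the "Certificate will not expire" message seen in the log. The same sweep can be expressed as a small loop over the control-plane certs (paths from the log; the loop itself is only illustrative):

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
        sudo openssl x509 -noout -checkend 86400 \
            -in "/var/lib/minikube/certs/${c}.crt" \
            && echo "${c}: valid for at least 24h" \
            || echo "${c}: expires within 24h"
    done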
	I0910 18:24:34.676459   42658 kubeadm.go:392] StartCluster: {Name:multinode-925076 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-925076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.164 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:24:34.676572   42658 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:24:34.676619   42658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:24:34.711462   42658 command_runner.go:130] > 7e28c3bf386c7f6b3456cbd5ffbc27426deb230f9c35ed5d09da9958cef0687b
	I0910 18:24:34.711484   42658 command_runner.go:130] > 267ae04b613e33275055e3ec9dc1ddc72631b493da0a0d7269a9615aac2f9733
	I0910 18:24:34.711493   42658 command_runner.go:130] > b4b03eebef957440150db22a8e23ac078472e28e2b05760b02db504a181aa0c8
	I0910 18:24:34.711503   42658 command_runner.go:130] > 4648cdf59f3f30d9cbbe5265394c226367f048450e2b327c1a7b81cc816a995b
	I0910 18:24:34.711512   42658 command_runner.go:130] > 248fbf0cae5341cdebf6cfbd758a48b77e48379df069ffdfff8aa8149f264113
	I0910 18:24:34.711522   42658 command_runner.go:130] > 5e4c3672b3e4dee78dd9e5a9b65197e3873c4a9c53965dcb7400c183ab876a1b
	I0910 18:24:34.711533   42658 command_runner.go:130] > 48859d1709a7bc6847927b022780370cde454c0419e16262db7bbaf31a878cd3
	I0910 18:24:34.711546   42658 command_runner.go:130] > e6c580dc81be234ffff0295376e4516b395bbdda306d809df83963ee02f0b246
	I0910 18:24:34.711573   42658 cri.go:89] found id: "7e28c3bf386c7f6b3456cbd5ffbc27426deb230f9c35ed5d09da9958cef0687b"
	I0910 18:24:34.711585   42658 cri.go:89] found id: "267ae04b613e33275055e3ec9dc1ddc72631b493da0a0d7269a9615aac2f9733"
	I0910 18:24:34.711590   42658 cri.go:89] found id: "b4b03eebef957440150db22a8e23ac078472e28e2b05760b02db504a181aa0c8"
	I0910 18:24:34.711598   42658 cri.go:89] found id: "4648cdf59f3f30d9cbbe5265394c226367f048450e2b327c1a7b81cc816a995b"
	I0910 18:24:34.711603   42658 cri.go:89] found id: "248fbf0cae5341cdebf6cfbd758a48b77e48379df069ffdfff8aa8149f264113"
	I0910 18:24:34.711610   42658 cri.go:89] found id: "5e4c3672b3e4dee78dd9e5a9b65197e3873c4a9c53965dcb7400c183ab876a1b"
	I0910 18:24:34.711614   42658 cri.go:89] found id: "48859d1709a7bc6847927b022780370cde454c0419e16262db7bbaf31a878cd3"
	I0910 18:24:34.711617   42658 cri.go:89] found id: "e6c580dc81be234ffff0295376e4516b395bbdda306d809df83963ee02f0b246"
	I0910 18:24:34.711619   42658 cri.go:89] found id: ""
	I0910 18:24:34.711656   42658 ssh_runner.go:195] Run: sudo runc list -f json
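StartCluster begins by enumerating existing kube-system containers through the CRI and then cross-checking the low-level runtime. The two commands it shells out to (copied from the log) can be replayed directly on the node when debugging a restart:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo runc list -f json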
	
	
	==> CRI-O <==
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.808322125Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992923808293404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b51aeb9c-d20a-4f54-9c48-f26220e80b41 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.809905824Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e935e34d-bfe5-4c3e-b986-2c032fc76214 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.809966969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e935e34d-bfe5-4c3e-b986-2c032fc76214 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.810415924Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a12f6a1f0c5a4e81403bc41c67a11ab96b43778e7184080cf02e7ba163e063c,PodSandboxId:e074512c790dca6c96654d28cb0bbfd406f66fe8c7216e203d238708c7306a50,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725992716258951136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gbtc6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 601a1920-7ada-4e22-bc76-a7168d0a0bf4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3aea89b49de5b59b0c2167610fb0c3f974618c21095f7fd04bce8f13e28b14,PodSandboxId:c860108c71b51ef2f506e83497b469d883cb52e402792cc24ae609305de0d131,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725992682870687072,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4dglr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babfa2bd-c4b5-4374-a486-8336d9be50b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f7c6eb1d280fdcc0f773caf8ba34f30726aebc808785c927ea8aeeaf3882c19,PodSandboxId:9621e71c7ef0b52b444749f3f91d6f4dc685162fe532473697646e84a303e8ed,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725992682709148360,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2n7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19158312-8a4b-4f63-8745-22339
d7878c2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b457507d0da41ddc5a8e0de89599c9d69bb5914f1d111fafaee11725308027,PodSandboxId:5d9c945f08338d367db919f2c522efa4d25542fb2dabc9e2fee91d73b682f230,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725992682620955813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66f13cfc-718f-4b14-bf4b-d2ee27044660,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bfc244fa8856bcc84943c722d7e61d6e8dcc5ddd69ad8c17c5c8ae5ded80ab,PodSandboxId:7682c5501f3c75cdb823447d1c7796fd87a095d7505731e37bffbaef16dcea90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725992682529540534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j26sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82133fae-7613-40c1-bf5e-1442806d5b4c,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5c9818ec2122dec6bcf3bd808ac03ad3ecf058f5377b5547d1fafc2773dd44,PodSandboxId:5031d687fbe639f03d115771c219a53896ffd9d6c0d7c484dd5dfdf69fdc20a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725992677201116952,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe5727e8261b45d281a09ec0b883902d,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e57778740f1076867b7107ddec5bd8359baca4f6dc51d6e47a690a3a3263d7f,PodSandboxId:824c9578b1825f70f636251cb32bcad2f3492251600131d8443ab25e42acb3f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725992677170605009,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b16718a9933e65146d574eda52f42b,},Annotations:map[string]string{io.kube
rnetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51637becea86dae763bacad207a60122b90103f03c5723dd3a03fafe64303651,PodSandboxId:f20d6e21051392dcbef0f036469b4781da05bdbd7682dc10c9788386b99349ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725992677103413113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0276b6da07265e518018db7a0ad97828,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c33747b9a7e31c35e1d58b859edfb6f0a2b19386be2ac4476640baaf59ca5e9,PodSandboxId:20a90a3fa8dfed8490e7f98060a0224e87b9a24a5a685b84f32dbd05d4bfea61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725992677040727967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a20b7386bbb0fcd5c5efd9a16a10f1a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5322d1fb1585e4fa1c8971d009fbd9e36a75e5d96c52a077ed858c5aba3f6,PodSandboxId:be7348f29e3a9f25bab84aea9e87da0b7a7c29397c3a066b0779fc5dd28e8d03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725992357209488442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gbtc6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 601a1920-7ada-4e22-bc76-a7168d0a0bf4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e28c3bf386c7f6b3456cbd5ffbc27426deb230f9c35ed5d09da9958cef0687b,PodSandboxId:b6c9ffea7d3910bb271189f7e25fbf13940ba77f177ed56a23a34e71a450243d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725992303859804144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4dglr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babfa2bd-c4b5-4374-a486-8336d9be50b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267ae04b613e33275055e3ec9dc1ddc72631b493da0a0d7269a9615aac2f9733,PodSandboxId:26fe6d01b14dfd0fd19b712cb4a66352d5c855cfa59ab7cd1d8be99bf578121b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725992302943489138,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 66f13cfc-718f-4b14-bf4b-d2ee27044660,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4b03eebef957440150db22a8e23ac078472e28e2b05760b02db504a181aa0c8,PodSandboxId:410f02f2b92394d7dda8bb2e446e84352b4226819e677b19e6419d2987de3280,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725992291394474064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2n7r,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 19158312-8a4b-4f63-8745-22339d7878c2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4648cdf59f3f30d9cbbe5265394c226367f048450e2b327c1a7b81cc816a995b,PodSandboxId:89879f1f08d805e5cc92a9c935f524912704ed7efb6df9e3882fe4a02bbccc45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725992289112428695,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j26sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 82133fae-7613-40c1-bf5e-1442806d5b4c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248fbf0cae5341cdebf6cfbd758a48b77e48379df069ffdfff8aa8149f264113,PodSandboxId:fe4bd7a4f5685d3dbd61f44efa67ded6128a43cae3cf82a63309282654e49824,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725992278504516985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a
20b7386bbb0fcd5c5efd9a16a10f1a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48859d1709a7bc6847927b022780370cde454c0419e16262db7bbaf31a878cd3,PodSandboxId:ae804a52d8bc7ec4957e30a8ec60cea6420472750c0be91a027e84f72b04cfc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725992278440959946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b16718a9933e651
46d574eda52f42b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4c3672b3e4dee78dd9e5a9b65197e3873c4a9c53965dcb7400c183ab876a1b,PodSandboxId:9321f7e4ce4d46fc1090899c779bcb495279a02d61a1bd45b2a4f7e9a62ff419,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725992278475986554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0276b6da07265e518018db7a0ad97828,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c580dc81be234ffff0295376e4516b395bbdda306d809df83963ee02f0b246,PodSandboxId:2ff3feb0b23e17fe12c300ef2d520bd085ac4ad913eadaaed21e8ea83c345735,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725992278430880301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe5727e8261b45d281a09ec0b883902d,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e935e34d-bfe5-4c3e-b986-2c032fc76214 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.854648750Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f5cd585-f05b-4f72-b814-e00188f4e38f name=/runtime.v1.RuntimeService/Version
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.854745102Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f5cd585-f05b-4f72-b814-e00188f4e38f name=/runtime.v1.RuntimeService/Version
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.855898415Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=983087d6-dfdb-46a4-b57d-00afff69cf60 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.856331424Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992923856311645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=983087d6-dfdb-46a4-b57d-00afff69cf60 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.856958942Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59d7e24d-fcbf-42a3-9b60-5cc21f5a72b1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.857031476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59d7e24d-fcbf-42a3-9b60-5cc21f5a72b1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.857370288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a12f6a1f0c5a4e81403bc41c67a11ab96b43778e7184080cf02e7ba163e063c,PodSandboxId:e074512c790dca6c96654d28cb0bbfd406f66fe8c7216e203d238708c7306a50,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725992716258951136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gbtc6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 601a1920-7ada-4e22-bc76-a7168d0a0bf4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3aea89b49de5b59b0c2167610fb0c3f974618c21095f7fd04bce8f13e28b14,PodSandboxId:c860108c71b51ef2f506e83497b469d883cb52e402792cc24ae609305de0d131,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725992682870687072,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4dglr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babfa2bd-c4b5-4374-a486-8336d9be50b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f7c6eb1d280fdcc0f773caf8ba34f30726aebc808785c927ea8aeeaf3882c19,PodSandboxId:9621e71c7ef0b52b444749f3f91d6f4dc685162fe532473697646e84a303e8ed,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725992682709148360,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2n7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19158312-8a4b-4f63-8745-22339
d7878c2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b457507d0da41ddc5a8e0de89599c9d69bb5914f1d111fafaee11725308027,PodSandboxId:5d9c945f08338d367db919f2c522efa4d25542fb2dabc9e2fee91d73b682f230,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725992682620955813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66f13cfc-718f-4b14-bf4b-d2ee27044660,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bfc244fa8856bcc84943c722d7e61d6e8dcc5ddd69ad8c17c5c8ae5ded80ab,PodSandboxId:7682c5501f3c75cdb823447d1c7796fd87a095d7505731e37bffbaef16dcea90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725992682529540534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j26sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82133fae-7613-40c1-bf5e-1442806d5b4c,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5c9818ec2122dec6bcf3bd808ac03ad3ecf058f5377b5547d1fafc2773dd44,PodSandboxId:5031d687fbe639f03d115771c219a53896ffd9d6c0d7c484dd5dfdf69fdc20a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725992677201116952,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe5727e8261b45d281a09ec0b883902d,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e57778740f1076867b7107ddec5bd8359baca4f6dc51d6e47a690a3a3263d7f,PodSandboxId:824c9578b1825f70f636251cb32bcad2f3492251600131d8443ab25e42acb3f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725992677170605009,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b16718a9933e65146d574eda52f42b,},Annotations:map[string]string{io.kube
rnetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51637becea86dae763bacad207a60122b90103f03c5723dd3a03fafe64303651,PodSandboxId:f20d6e21051392dcbef0f036469b4781da05bdbd7682dc10c9788386b99349ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725992677103413113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0276b6da07265e518018db7a0ad97828,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c33747b9a7e31c35e1d58b859edfb6f0a2b19386be2ac4476640baaf59ca5e9,PodSandboxId:20a90a3fa8dfed8490e7f98060a0224e87b9a24a5a685b84f32dbd05d4bfea61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725992677040727967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a20b7386bbb0fcd5c5efd9a16a10f1a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5322d1fb1585e4fa1c8971d009fbd9e36a75e5d96c52a077ed858c5aba3f6,PodSandboxId:be7348f29e3a9f25bab84aea9e87da0b7a7c29397c3a066b0779fc5dd28e8d03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725992357209488442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gbtc6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 601a1920-7ada-4e22-bc76-a7168d0a0bf4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e28c3bf386c7f6b3456cbd5ffbc27426deb230f9c35ed5d09da9958cef0687b,PodSandboxId:b6c9ffea7d3910bb271189f7e25fbf13940ba77f177ed56a23a34e71a450243d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725992303859804144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4dglr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babfa2bd-c4b5-4374-a486-8336d9be50b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267ae04b613e33275055e3ec9dc1ddc72631b493da0a0d7269a9615aac2f9733,PodSandboxId:26fe6d01b14dfd0fd19b712cb4a66352d5c855cfa59ab7cd1d8be99bf578121b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725992302943489138,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 66f13cfc-718f-4b14-bf4b-d2ee27044660,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4b03eebef957440150db22a8e23ac078472e28e2b05760b02db504a181aa0c8,PodSandboxId:410f02f2b92394d7dda8bb2e446e84352b4226819e677b19e6419d2987de3280,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725992291394474064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2n7r,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 19158312-8a4b-4f63-8745-22339d7878c2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4648cdf59f3f30d9cbbe5265394c226367f048450e2b327c1a7b81cc816a995b,PodSandboxId:89879f1f08d805e5cc92a9c935f524912704ed7efb6df9e3882fe4a02bbccc45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725992289112428695,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j26sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 82133fae-7613-40c1-bf5e-1442806d5b4c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248fbf0cae5341cdebf6cfbd758a48b77e48379df069ffdfff8aa8149f264113,PodSandboxId:fe4bd7a4f5685d3dbd61f44efa67ded6128a43cae3cf82a63309282654e49824,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725992278504516985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a
20b7386bbb0fcd5c5efd9a16a10f1a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48859d1709a7bc6847927b022780370cde454c0419e16262db7bbaf31a878cd3,PodSandboxId:ae804a52d8bc7ec4957e30a8ec60cea6420472750c0be91a027e84f72b04cfc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725992278440959946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b16718a9933e651
46d574eda52f42b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4c3672b3e4dee78dd9e5a9b65197e3873c4a9c53965dcb7400c183ab876a1b,PodSandboxId:9321f7e4ce4d46fc1090899c779bcb495279a02d61a1bd45b2a4f7e9a62ff419,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725992278475986554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0276b6da07265e518018db7a0ad97828,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c580dc81be234ffff0295376e4516b395bbdda306d809df83963ee02f0b246,PodSandboxId:2ff3feb0b23e17fe12c300ef2d520bd085ac4ad913eadaaed21e8ea83c345735,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725992278430880301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe5727e8261b45d281a09ec0b883902d,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59d7e24d-fcbf-42a3-9b60-5cc21f5a72b1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.903360157Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=739f0c8f-4016-4bb0-9601-079dc3782637 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.903451310Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=739f0c8f-4016-4bb0-9601-079dc3782637 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.904437605Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6946134b-4e6d-43d5-ad63-8dba65734699 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.904942624Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992923904918319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6946134b-4e6d-43d5-ad63-8dba65734699 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.905423603Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ab00f43-a456-4967-9ea4-d392f53127ae name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.905497253Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ab00f43-a456-4967-9ea4-d392f53127ae name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.906384551Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a12f6a1f0c5a4e81403bc41c67a11ab96b43778e7184080cf02e7ba163e063c,PodSandboxId:e074512c790dca6c96654d28cb0bbfd406f66fe8c7216e203d238708c7306a50,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725992716258951136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gbtc6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 601a1920-7ada-4e22-bc76-a7168d0a0bf4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3aea89b49de5b59b0c2167610fb0c3f974618c21095f7fd04bce8f13e28b14,PodSandboxId:c860108c71b51ef2f506e83497b469d883cb52e402792cc24ae609305de0d131,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725992682870687072,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4dglr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babfa2bd-c4b5-4374-a486-8336d9be50b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f7c6eb1d280fdcc0f773caf8ba34f30726aebc808785c927ea8aeeaf3882c19,PodSandboxId:9621e71c7ef0b52b444749f3f91d6f4dc685162fe532473697646e84a303e8ed,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725992682709148360,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2n7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19158312-8a4b-4f63-8745-22339
d7878c2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b457507d0da41ddc5a8e0de89599c9d69bb5914f1d111fafaee11725308027,PodSandboxId:5d9c945f08338d367db919f2c522efa4d25542fb2dabc9e2fee91d73b682f230,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725992682620955813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66f13cfc-718f-4b14-bf4b-d2ee27044660,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bfc244fa8856bcc84943c722d7e61d6e8dcc5ddd69ad8c17c5c8ae5ded80ab,PodSandboxId:7682c5501f3c75cdb823447d1c7796fd87a095d7505731e37bffbaef16dcea90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725992682529540534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j26sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82133fae-7613-40c1-bf5e-1442806d5b4c,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5c9818ec2122dec6bcf3bd808ac03ad3ecf058f5377b5547d1fafc2773dd44,PodSandboxId:5031d687fbe639f03d115771c219a53896ffd9d6c0d7c484dd5dfdf69fdc20a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725992677201116952,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe5727e8261b45d281a09ec0b883902d,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e57778740f1076867b7107ddec5bd8359baca4f6dc51d6e47a690a3a3263d7f,PodSandboxId:824c9578b1825f70f636251cb32bcad2f3492251600131d8443ab25e42acb3f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725992677170605009,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b16718a9933e65146d574eda52f42b,},Annotations:map[string]string{io.kube
rnetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51637becea86dae763bacad207a60122b90103f03c5723dd3a03fafe64303651,PodSandboxId:f20d6e21051392dcbef0f036469b4781da05bdbd7682dc10c9788386b99349ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725992677103413113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0276b6da07265e518018db7a0ad97828,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c33747b9a7e31c35e1d58b859edfb6f0a2b19386be2ac4476640baaf59ca5e9,PodSandboxId:20a90a3fa8dfed8490e7f98060a0224e87b9a24a5a685b84f32dbd05d4bfea61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725992677040727967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a20b7386bbb0fcd5c5efd9a16a10f1a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5322d1fb1585e4fa1c8971d009fbd9e36a75e5d96c52a077ed858c5aba3f6,PodSandboxId:be7348f29e3a9f25bab84aea9e87da0b7a7c29397c3a066b0779fc5dd28e8d03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725992357209488442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gbtc6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 601a1920-7ada-4e22-bc76-a7168d0a0bf4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e28c3bf386c7f6b3456cbd5ffbc27426deb230f9c35ed5d09da9958cef0687b,PodSandboxId:b6c9ffea7d3910bb271189f7e25fbf13940ba77f177ed56a23a34e71a450243d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725992303859804144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4dglr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babfa2bd-c4b5-4374-a486-8336d9be50b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267ae04b613e33275055e3ec9dc1ddc72631b493da0a0d7269a9615aac2f9733,PodSandboxId:26fe6d01b14dfd0fd19b712cb4a66352d5c855cfa59ab7cd1d8be99bf578121b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725992302943489138,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 66f13cfc-718f-4b14-bf4b-d2ee27044660,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4b03eebef957440150db22a8e23ac078472e28e2b05760b02db504a181aa0c8,PodSandboxId:410f02f2b92394d7dda8bb2e446e84352b4226819e677b19e6419d2987de3280,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725992291394474064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2n7r,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 19158312-8a4b-4f63-8745-22339d7878c2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4648cdf59f3f30d9cbbe5265394c226367f048450e2b327c1a7b81cc816a995b,PodSandboxId:89879f1f08d805e5cc92a9c935f524912704ed7efb6df9e3882fe4a02bbccc45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725992289112428695,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j26sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 82133fae-7613-40c1-bf5e-1442806d5b4c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248fbf0cae5341cdebf6cfbd758a48b77e48379df069ffdfff8aa8149f264113,PodSandboxId:fe4bd7a4f5685d3dbd61f44efa67ded6128a43cae3cf82a63309282654e49824,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725992278504516985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a
20b7386bbb0fcd5c5efd9a16a10f1a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48859d1709a7bc6847927b022780370cde454c0419e16262db7bbaf31a878cd3,PodSandboxId:ae804a52d8bc7ec4957e30a8ec60cea6420472750c0be91a027e84f72b04cfc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725992278440959946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b16718a9933e651
46d574eda52f42b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4c3672b3e4dee78dd9e5a9b65197e3873c4a9c53965dcb7400c183ab876a1b,PodSandboxId:9321f7e4ce4d46fc1090899c779bcb495279a02d61a1bd45b2a4f7e9a62ff419,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725992278475986554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0276b6da07265e518018db7a0ad97828,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c580dc81be234ffff0295376e4516b395bbdda306d809df83963ee02f0b246,PodSandboxId:2ff3feb0b23e17fe12c300ef2d520bd085ac4ad913eadaaed21e8ea83c345735,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725992278430880301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe5727e8261b45d281a09ec0b883902d,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ab00f43-a456-4967-9ea4-d392f53127ae name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.949269313Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cab8fd89-dac6-49d3-848c-e3c8c42d3744 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.949340865Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cab8fd89-dac6-49d3-848c-e3c8c42d3744 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.957747641Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e87eda5-9594-4cd9-bbea-307658223879 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.958365079Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992923958340321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e87eda5-9594-4cd9-bbea-307658223879 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.958952571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fccae344-1d23-431c-b9a9-50917e28a351 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.959006564Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fccae344-1d23-431c-b9a9-50917e28a351 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:28:43 multinode-925076 crio[2744]: time="2024-09-10 18:28:43.959361569Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a12f6a1f0c5a4e81403bc41c67a11ab96b43778e7184080cf02e7ba163e063c,PodSandboxId:e074512c790dca6c96654d28cb0bbfd406f66fe8c7216e203d238708c7306a50,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725992716258951136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gbtc6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 601a1920-7ada-4e22-bc76-a7168d0a0bf4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3aea89b49de5b59b0c2167610fb0c3f974618c21095f7fd04bce8f13e28b14,PodSandboxId:c860108c71b51ef2f506e83497b469d883cb52e402792cc24ae609305de0d131,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725992682870687072,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4dglr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babfa2bd-c4b5-4374-a486-8336d9be50b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f7c6eb1d280fdcc0f773caf8ba34f30726aebc808785c927ea8aeeaf3882c19,PodSandboxId:9621e71c7ef0b52b444749f3f91d6f4dc685162fe532473697646e84a303e8ed,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725992682709148360,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2n7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19158312-8a4b-4f63-8745-22339
d7878c2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b457507d0da41ddc5a8e0de89599c9d69bb5914f1d111fafaee11725308027,PodSandboxId:5d9c945f08338d367db919f2c522efa4d25542fb2dabc9e2fee91d73b682f230,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725992682620955813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66f13cfc-718f-4b14-bf4b-d2ee27044660,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bfc244fa8856bcc84943c722d7e61d6e8dcc5ddd69ad8c17c5c8ae5ded80ab,PodSandboxId:7682c5501f3c75cdb823447d1c7796fd87a095d7505731e37bffbaef16dcea90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725992682529540534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j26sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82133fae-7613-40c1-bf5e-1442806d5b4c,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da5c9818ec2122dec6bcf3bd808ac03ad3ecf058f5377b5547d1fafc2773dd44,PodSandboxId:5031d687fbe639f03d115771c219a53896ffd9d6c0d7c484dd5dfdf69fdc20a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725992677201116952,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe5727e8261b45d281a09ec0b883902d,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e57778740f1076867b7107ddec5bd8359baca4f6dc51d6e47a690a3a3263d7f,PodSandboxId:824c9578b1825f70f636251cb32bcad2f3492251600131d8443ab25e42acb3f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725992677170605009,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b16718a9933e65146d574eda52f42b,},Annotations:map[string]string{io.kube
rnetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51637becea86dae763bacad207a60122b90103f03c5723dd3a03fafe64303651,PodSandboxId:f20d6e21051392dcbef0f036469b4781da05bdbd7682dc10c9788386b99349ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725992677103413113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0276b6da07265e518018db7a0ad97828,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c33747b9a7e31c35e1d58b859edfb6f0a2b19386be2ac4476640baaf59ca5e9,PodSandboxId:20a90a3fa8dfed8490e7f98060a0224e87b9a24a5a685b84f32dbd05d4bfea61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725992677040727967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a20b7386bbb0fcd5c5efd9a16a10f1a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5322d1fb1585e4fa1c8971d009fbd9e36a75e5d96c52a077ed858c5aba3f6,PodSandboxId:be7348f29e3a9f25bab84aea9e87da0b7a7c29397c3a066b0779fc5dd28e8d03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725992357209488442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gbtc6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 601a1920-7ada-4e22-bc76-a7168d0a0bf4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e28c3bf386c7f6b3456cbd5ffbc27426deb230f9c35ed5d09da9958cef0687b,PodSandboxId:b6c9ffea7d3910bb271189f7e25fbf13940ba77f177ed56a23a34e71a450243d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725992303859804144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4dglr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babfa2bd-c4b5-4374-a486-8336d9be50b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267ae04b613e33275055e3ec9dc1ddc72631b493da0a0d7269a9615aac2f9733,PodSandboxId:26fe6d01b14dfd0fd19b712cb4a66352d5c855cfa59ab7cd1d8be99bf578121b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725992302943489138,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 66f13cfc-718f-4b14-bf4b-d2ee27044660,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4b03eebef957440150db22a8e23ac078472e28e2b05760b02db504a181aa0c8,PodSandboxId:410f02f2b92394d7dda8bb2e446e84352b4226819e677b19e6419d2987de3280,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725992291394474064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2n7r,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 19158312-8a4b-4f63-8745-22339d7878c2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4648cdf59f3f30d9cbbe5265394c226367f048450e2b327c1a7b81cc816a995b,PodSandboxId:89879f1f08d805e5cc92a9c935f524912704ed7efb6df9e3882fe4a02bbccc45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725992289112428695,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j26sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 82133fae-7613-40c1-bf5e-1442806d5b4c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248fbf0cae5341cdebf6cfbd758a48b77e48379df069ffdfff8aa8149f264113,PodSandboxId:fe4bd7a4f5685d3dbd61f44efa67ded6128a43cae3cf82a63309282654e49824,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725992278504516985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a
20b7386bbb0fcd5c5efd9a16a10f1a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48859d1709a7bc6847927b022780370cde454c0419e16262db7bbaf31a878cd3,PodSandboxId:ae804a52d8bc7ec4957e30a8ec60cea6420472750c0be91a027e84f72b04cfc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725992278440959946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b16718a9933e651
46d574eda52f42b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4c3672b3e4dee78dd9e5a9b65197e3873c4a9c53965dcb7400c183ab876a1b,PodSandboxId:9321f7e4ce4d46fc1090899c779bcb495279a02d61a1bd45b2a4f7e9a62ff419,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725992278475986554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0276b6da07265e518018db7a0ad97828,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c580dc81be234ffff0295376e4516b395bbdda306d809df83963ee02f0b246,PodSandboxId:2ff3feb0b23e17fe12c300ef2d520bd085ac4ad913eadaaed21e8ea83c345735,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725992278430880301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-925076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe5727e8261b45d281a09ec0b883902d,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fccae344-1d23-431c-b9a9-50917e28a351 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0a12f6a1f0c5a       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   e074512c790dc       busybox-7dff88458-gbtc6
	2f3aea89b49de       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   c860108c71b51       coredns-6f6b679f8f-4dglr
	1f7c6eb1d280f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   9621e71c7ef0b       kindnet-d2n7r
	27b457507d0da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   5d9c945f08338       storage-provisioner
	39bfc244fa885       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   7682c5501f3c7       kube-proxy-j26sr
	da5c9818ec212       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   5031d687fbe63       kube-controller-manager-multinode-925076
	8e57778740f10       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   824c9578b1825       kube-scheduler-multinode-925076
	51637becea86d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   f20d6e2105139       etcd-multinode-925076
	8c33747b9a7e3       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   20a90a3fa8dfe       kube-apiserver-multinode-925076
	f2c5322d1fb15       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   be7348f29e3a9       busybox-7dff88458-gbtc6
	7e28c3bf386c7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   b6c9ffea7d391       coredns-6f6b679f8f-4dglr
	267ae04b613e3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   26fe6d01b14df       storage-provisioner
	b4b03eebef957       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   410f02f2b9239       kindnet-d2n7r
	4648cdf59f3f3       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   89879f1f08d80       kube-proxy-j26sr
	248fbf0cae534       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   fe4bd7a4f5685       kube-apiserver-multinode-925076
	5e4c3672b3e4d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   9321f7e4ce4d4       etcd-multinode-925076
	48859d1709a7b       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   ae804a52d8bc7       kube-scheduler-multinode-925076
	e6c580dc81be2       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   2ff3feb0b23e1       kube-controller-manager-multinode-925076
	
	
	==> coredns [2f3aea89b49de5b59b0c2167610fb0c3f974618c21095f7fd04bce8f13e28b14] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36541 - 20701 "HINFO IN 3910805571411210170.1997818468255851267. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012175401s
	
	
	==> coredns [7e28c3bf386c7f6b3456cbd5ffbc27426deb230f9c35ed5d09da9958cef0687b] <==
	[INFO] 10.244.1.2:58085 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001577877s
	[INFO] 10.244.1.2:48948 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077666s
	[INFO] 10.244.1.2:44256 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062484s
	[INFO] 10.244.1.2:35064 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001021504s
	[INFO] 10.244.1.2:56743 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00007797s
	[INFO] 10.244.1.2:35138 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085117s
	[INFO] 10.244.1.2:40977 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079006s
	[INFO] 10.244.0.3:54692 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009979s
	[INFO] 10.244.0.3:40426 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009554s
	[INFO] 10.244.0.3:51805 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067476s
	[INFO] 10.244.0.3:42333 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069781s
	[INFO] 10.244.1.2:54055 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106117s
	[INFO] 10.244.1.2:44588 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106247s
	[INFO] 10.244.1.2:59910 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078344s
	[INFO] 10.244.1.2:41523 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072519s
	[INFO] 10.244.0.3:39614 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118216s
	[INFO] 10.244.0.3:47589 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000110996s
	[INFO] 10.244.0.3:49606 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078004s
	[INFO] 10.244.0.3:49841 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092246s
	[INFO] 10.244.1.2:42558 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183375s
	[INFO] 10.244.1.2:53210 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000101906s
	[INFO] 10.244.1.2:52654 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079637s
	[INFO] 10.244.1.2:46369 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000068024s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
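
	The CoreDNS log above shows the cluster DNS answering the search-path expansions of kubernetes.default and host.minikube.internal with NOERROR before receiving SIGTERM at shutdown. For reference, a minimal Go lookup of the fully qualified service name those answers correspond to might look like the sketch below (added for illustration, not part of the captured logs); it only resolves from inside a pod whose resolv.conf points at the cluster DNS.

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		// Fully-qualified service name, matching the NOERROR answers in the log above.
		addrs, err := net.DefaultResolver.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved to:", addrs) // typically the kubernetes Service ClusterIP
	}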
	
	
	==> describe nodes <==
	Name:               multinode-925076
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-925076
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=multinode-925076
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T18_18_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:18:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-925076
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:28:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:24:40 +0000   Tue, 10 Sep 2024 18:17:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:24:40 +0000   Tue, 10 Sep 2024 18:17:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:24:40 +0000   Tue, 10 Sep 2024 18:17:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:24:40 +0000   Tue, 10 Sep 2024 18:18:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.248
	  Hostname:    multinode-925076
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c5fbae42c9740639faedcc3dd37cd0c
	  System UUID:                6c5fbae4-2c97-4063-9fae-dcc3dd37cd0c
	  Boot ID:                    13243f56-bc41-4383-9f8c-f52b33ae4478
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gbtc6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 coredns-6f6b679f8f-4dglr                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-925076                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-d2n7r                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-925076             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-925076    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-j26sr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-925076             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)    kubelet          Node multinode-925076 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)    kubelet          Node multinode-925076 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)    kubelet          Node multinode-925076 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-925076 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-925076 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-925076 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-925076 event: Registered Node multinode-925076 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-925076 status is now: NodeReady
	  Normal  Starting                 4m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node multinode-925076 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node multinode-925076 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node multinode-925076 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node multinode-925076 event: Registered Node multinode-925076 in Controller
	
	
	Name:               multinode-925076-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-925076-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=multinode-925076
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_10T18_25_22_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:25:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-925076-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:26:23 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 10 Sep 2024 18:25:52 +0000   Tue, 10 Sep 2024 18:27:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 10 Sep 2024 18:25:52 +0000   Tue, 10 Sep 2024 18:27:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 10 Sep 2024 18:25:52 +0000   Tue, 10 Sep 2024 18:27:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 10 Sep 2024 18:25:52 +0000   Tue, 10 Sep 2024 18:27:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.31
	  Hostname:    multinode-925076-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa7366426b6546c29cbf192a51fa99e6
	  System UUID:                aa736642-6b65-46c2-9cbf-192a51fa99e6
	  Boot ID:                    b24c0be9-954a-49a4-ae50-c386116638b4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-59xdp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kindnet-hwts7              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m50s
	  kube-system                 kube-proxy-vpg55           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m17s                  kube-proxy       
	  Normal  Starting                 9m45s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  9m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m50s (x2 over 9m51s)  kubelet          Node multinode-925076-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m50s (x2 over 9m51s)  kubelet          Node multinode-925076-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m50s (x2 over 9m51s)  kubelet          Node multinode-925076-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m32s                  kubelet          Node multinode-925076-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m23s (x2 over 3m23s)  kubelet          Node multinode-925076-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m23s (x2 over 3m23s)  kubelet          Node multinode-925076-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m23s (x2 over 3m23s)  kubelet          Node multinode-925076-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-925076-m02 status is now: NodeReady
	  Normal  NodeNotReady             101s                   node-controller  Node multinode-925076-m02 status is now: NodeNotReady
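
	The describe output above shows multinode-925076-m02 tainted unreachable with every condition Unknown after its kubelet stopped posting status, which is the state the stop/restart tests in this report exercise. A node's Ready condition can be read with client-go roughly as in the sketch below (illustrative only; the kubeconfig path is a placeholder).

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; substitute the context used for this cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-925076-m02", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// For the state captured above this prints Unknown / NodeStatusUnknown.
				fmt.Printf("Ready=%s reason=%s since=%s\n", c.Status, c.Reason, c.LastTransitionTime)
			}
		}
	}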
	
	
	==> dmesg <==
	[  +0.054862] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.187431] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.126014] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.284016] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.883090] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +4.029593] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.075891] kauditd_printk_skb: 158 callbacks suppressed
	[Sep10 18:18] systemd-fstab-generator[1206]: Ignoring "noauto" option for root device
	[  +0.089532] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.636176] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.152716] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[ +13.703163] kauditd_printk_skb: 60 callbacks suppressed
	[Sep10 18:19] kauditd_printk_skb: 14 callbacks suppressed
	[Sep10 18:24] systemd-fstab-generator[2668]: Ignoring "noauto" option for root device
	[  +0.143662] systemd-fstab-generator[2680]: Ignoring "noauto" option for root device
	[  +0.168714] systemd-fstab-generator[2694]: Ignoring "noauto" option for root device
	[  +0.132316] systemd-fstab-generator[2706]: Ignoring "noauto" option for root device
	[  +0.272309] systemd-fstab-generator[2734]: Ignoring "noauto" option for root device
	[  +5.318092] systemd-fstab-generator[2828]: Ignoring "noauto" option for root device
	[  +0.079860] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.960659] systemd-fstab-generator[2950]: Ignoring "noauto" option for root device
	[  +6.201056] kauditd_printk_skb: 74 callbacks suppressed
	[ +14.935796] systemd-fstab-generator[3800]: Ignoring "noauto" option for root device
	[  +0.100573] kauditd_printk_skb: 36 callbacks suppressed
	[Sep10 18:25] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [51637becea86dae763bacad207a60122b90103f03c5723dd3a03fafe64303651] <==
	{"level":"info","ts":"2024-09-10T18:24:37.525948Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ffc6a57a6de49e73","local-member-id":"1aa4f7d85b49255a","added-peer-id":"1aa4f7d85b49255a","added-peer-peer-urls":["https://192.168.39.248:2380"]}
	{"level":"info","ts":"2024-09-10T18:24:37.526099Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ffc6a57a6de49e73","local-member-id":"1aa4f7d85b49255a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:24:37.526149Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:24:37.552125Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:24:37.555383Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-10T18:24:37.561217Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"1aa4f7d85b49255a","initial-advertise-peer-urls":["https://192.168.39.248:2380"],"listen-peer-urls":["https://192.168.39.248:2380"],"advertise-client-urls":["https://192.168.39.248:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.248:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-10T18:24:37.561356Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-10T18:24:37.561753Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.248:2380"}
	{"level":"info","ts":"2024-09-10T18:24:37.565909Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.248:2380"}
	{"level":"info","ts":"2024-09-10T18:24:39.081905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-10T18:24:39.081951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-10T18:24:39.081987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a received MsgPreVoteResp from 1aa4f7d85b49255a at term 2"}
	{"level":"info","ts":"2024-09-10T18:24:39.081999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a became candidate at term 3"}
	{"level":"info","ts":"2024-09-10T18:24:39.082005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a received MsgVoteResp from 1aa4f7d85b49255a at term 3"}
	{"level":"info","ts":"2024-09-10T18:24:39.082014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1aa4f7d85b49255a became leader at term 3"}
	{"level":"info","ts":"2024-09-10T18:24:39.082021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1aa4f7d85b49255a elected leader 1aa4f7d85b49255a at term 3"}
	{"level":"info","ts":"2024-09-10T18:24:39.087775Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:24:39.088797Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:24:39.087737Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1aa4f7d85b49255a","local-member-attributes":"{Name:multinode-925076 ClientURLs:[https://192.168.39.248:2379]}","request-path":"/0/members/1aa4f7d85b49255a/attributes","cluster-id":"ffc6a57a6de49e73","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T18:24:39.089176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:24:39.089455Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T18:24:39.089493Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T18:24:39.089779Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.248:2379"}
	{"level":"info","ts":"2024-09-10T18:24:39.090234Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:24:39.091204Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [5e4c3672b3e4dee78dd9e5a9b65197e3873c4a9c53965dcb7400c183ab876a1b] <==
	{"level":"info","ts":"2024-09-10T18:17:59.011083Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1aa4f7d85b49255a","local-member-attributes":"{Name:multinode-925076 ClientURLs:[https://192.168.39.248:2379]}","request-path":"/0/members/1aa4f7d85b49255a/attributes","cluster-id":"ffc6a57a6de49e73","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T18:17:59.011271Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:17:59.011574Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:17:59.011916Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:17:59.016879Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T18:17:59.016951Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T18:17:59.018722Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:17:59.019577Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.248:2379"}
	{"level":"info","ts":"2024-09-10T18:17:59.014651Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:17:59.030624Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T18:17:59.031960Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ffc6a57a6de49e73","local-member-id":"1aa4f7d85b49255a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:17:59.069727Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:17:59.112150Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:18:53.884265Z","caller":"traceutil/trace.go:171","msg":"trace[1652183131] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"102.802549ms","start":"2024-09-10T18:18:53.781437Z","end":"2024-09-10T18:18:53.884239Z","steps":["trace[1652183131] 'process raft request'  (duration: 88.674128ms)","trace[1652183131] 'compare'  (duration: 14.049045ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-10T18:20:38.027101Z","caller":"traceutil/trace.go:171","msg":"trace[1798120250] transaction","detail":"{read_only:false; response_revision:738; number_of_response:1; }","duration":"139.268402ms","start":"2024-09-10T18:20:37.887720Z","end":"2024-09-10T18:20:38.026988Z","steps":["trace[1798120250] 'process raft request'  (duration: 138.059865ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T18:22:57.002394Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-10T18:22:57.002544Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-925076","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.248:2380"],"advertise-client-urls":["https://192.168.39.248:2379"]}
	{"level":"warn","ts":"2024-09-10T18:22:57.002715Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-10T18:22:57.002814Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-10T18:22:57.081545Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.248:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-10T18:22:57.082005Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.248:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-10T18:22:57.083242Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1aa4f7d85b49255a","current-leader-member-id":"1aa4f7d85b49255a"}
	{"level":"info","ts":"2024-09-10T18:22:57.085631Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.248:2380"}
	{"level":"info","ts":"2024-09-10T18:22:57.085809Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.248:2380"}
	{"level":"info","ts":"2024-09-10T18:22:57.085908Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-925076","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.248:2380"],"advertise-client-urls":["https://192.168.39.248:2379"]}
	
	
	==> kernel <==
	 18:28:44 up 11 min,  0 users,  load average: 0.15, 0.17, 0.11
	Linux multinode-925076 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1f7c6eb1d280fdcc0f773caf8ba34f30726aebc808785c927ea8aeeaf3882c19] <==
	I0910 18:27:43.883347       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
	I0910 18:27:53.887782       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0910 18:27:53.887879       1 main.go:299] handling current node
	I0910 18:27:53.887898       1 main.go:295] Handling node with IPs: map[192.168.39.31:{}]
	I0910 18:27:53.887904       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
	I0910 18:28:03.889602       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0910 18:28:03.889653       1 main.go:299] handling current node
	I0910 18:28:03.889671       1 main.go:295] Handling node with IPs: map[192.168.39.31:{}]
	I0910 18:28:03.889677       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
	I0910 18:28:13.890352       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0910 18:28:13.890494       1 main.go:299] handling current node
	I0910 18:28:13.890535       1 main.go:295] Handling node with IPs: map[192.168.39.31:{}]
	I0910 18:28:13.890557       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
	I0910 18:28:23.891553       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0910 18:28:23.891699       1 main.go:299] handling current node
	I0910 18:28:23.891730       1 main.go:295] Handling node with IPs: map[192.168.39.31:{}]
	I0910 18:28:23.891749       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
	I0910 18:28:33.892705       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0910 18:28:33.892756       1 main.go:299] handling current node
	I0910 18:28:33.892809       1 main.go:295] Handling node with IPs: map[192.168.39.31:{}]
	I0910 18:28:33.892816       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
	I0910 18:28:43.883041       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0910 18:28:43.883085       1 main.go:299] handling current node
	I0910 18:28:43.883102       1 main.go:295] Handling node with IPs: map[192.168.39.31:{}]
	I0910 18:28:43.883107       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
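
	The kindnet log above is its steady-state reconcile loop: roughly every ten seconds it re-reads each node's InternalIP and PodCIDR and keeps routes for the remote CIDRs in place. The same per-node data it is reporting can be listed with client-go from inside the cluster, sketched here for illustration.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster credentials, as a daemonset pod like kindnet would use them.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Mirrors the "Handling node with IPs ... has CIDR ..." lines above.
			fmt.Printf("%s podCIDRs=%v addresses=%v\n", n.Name, n.Spec.PodCIDRs, n.Status.Addresses)
		}
	}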
	
	
	==> kindnet [b4b03eebef957440150db22a8e23ac078472e28e2b05760b02db504a181aa0c8] <==
	I0910 18:22:12.380813       1 main.go:322] Node multinode-925076-m03 has CIDR [10.244.3.0/24] 
	I0910 18:22:22.386954       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0910 18:22:22.387009       1 main.go:299] handling current node
	I0910 18:22:22.387023       1 main.go:295] Handling node with IPs: map[192.168.39.31:{}]
	I0910 18:22:22.387028       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
	I0910 18:22:22.387186       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0910 18:22:22.387210       1 main.go:322] Node multinode-925076-m03 has CIDR [10.244.3.0/24] 
	I0910 18:22:32.390126       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0910 18:22:32.390257       1 main.go:299] handling current node
	I0910 18:22:32.390285       1 main.go:295] Handling node with IPs: map[192.168.39.31:{}]
	I0910 18:22:32.390302       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
	I0910 18:22:32.390465       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0910 18:22:32.390489       1 main.go:322] Node multinode-925076-m03 has CIDR [10.244.3.0/24] 
	I0910 18:22:42.380708       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0910 18:22:42.380926       1 main.go:299] handling current node
	I0910 18:22:42.380964       1 main.go:295] Handling node with IPs: map[192.168.39.31:{}]
	I0910 18:22:42.380985       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
	I0910 18:22:42.381140       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0910 18:22:42.381162       1 main.go:322] Node multinode-925076-m03 has CIDR [10.244.3.0/24] 
	I0910 18:22:52.389527       1 main.go:295] Handling node with IPs: map[192.168.39.248:{}]
	I0910 18:22:52.389578       1 main.go:299] handling current node
	I0910 18:22:52.389602       1 main.go:295] Handling node with IPs: map[192.168.39.31:{}]
	I0910 18:22:52.389622       1 main.go:322] Node multinode-925076-m02 has CIDR [10.244.1.0/24] 
	I0910 18:22:52.389770       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0910 18:22:52.389775       1 main.go:322] Node multinode-925076-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [248fbf0cae5341cdebf6cfbd758a48b77e48379df069ffdfff8aa8149f264113] <==
	W0910 18:18:02.663011       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.248]
	I0910 18:18:02.664185       1 controller.go:615] quota admission added evaluator for: endpoints
	I0910 18:18:02.669046       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0910 18:18:03.024225       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0910 18:18:03.638766       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0910 18:18:03.655045       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0910 18:18:03.666989       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0910 18:18:08.574811       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0910 18:18:08.675616       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0910 18:19:18.340346       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:56496: use of closed network connection
	E0910 18:19:18.503707       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:56510: use of closed network connection
	E0910 18:19:18.680385       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:56518: use of closed network connection
	E0910 18:19:18.849007       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:56528: use of closed network connection
	E0910 18:19:19.010593       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:56546: use of closed network connection
	E0910 18:19:19.181646       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:56558: use of closed network connection
	E0910 18:19:19.447171       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:56598: use of closed network connection
	E0910 18:19:19.613066       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:56616: use of closed network connection
	E0910 18:19:19.770561       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:43780: use of closed network connection
	E0910 18:19:19.928666       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:43794: use of closed network connection
	I0910 18:22:57.005763       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0910 18:22:57.017975       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:22:57.018307       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:22:57.019265       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:22:57.019329       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:22:57.019357       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [8c33747b9a7e31c35e1d58b859edfb6f0a2b19386be2ac4476640baaf59ca5e9] <==
	I0910 18:24:40.493863       1 aggregator.go:171] initial CRD sync complete...
	I0910 18:24:40.493880       1 autoregister_controller.go:144] Starting autoregister controller
	I0910 18:24:40.493885       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0910 18:24:40.493890       1 cache.go:39] Caches are synced for autoregister controller
	I0910 18:24:40.494436       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0910 18:24:40.501047       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0910 18:24:40.501083       1 policy_source.go:224] refreshing policies
	I0910 18:24:40.541390       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0910 18:24:40.541429       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0910 18:24:40.541810       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0910 18:24:40.542320       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0910 18:24:40.544511       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0910 18:24:40.544615       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0910 18:24:40.544950       1 shared_informer.go:320] Caches are synced for configmaps
	I0910 18:24:40.547354       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0910 18:24:40.555765       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0910 18:24:40.567750       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0910 18:24:41.355166       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0910 18:24:42.518545       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0910 18:24:43.073592       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0910 18:24:43.134250       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0910 18:24:43.279199       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0910 18:24:43.300944       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0910 18:24:43.991159       1 controller.go:615] quota admission added evaluator for: endpoints
	I0910 18:24:44.188486       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
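
	The log above shows the restarted kube-apiserver finishing its cache sync and re-registering quota admission evaluators. A rough readiness check against its /readyz endpoint could look like the sketch below; it assumes the default anonymous access to health endpoints and skips certificate verification for brevity (a real check would trust the cluster CA instead).

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// 192.168.39.248:8443 is the apiserver address seen elsewhere in these logs.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.248:8443/readyz?verbose")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status)
		fmt.Println(string(body))
	}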
	
	
	==> kube-controller-manager [da5c9818ec2122dec6bcf3bd808ac03ad3ecf058f5377b5547d1fafc2773dd44] <==
	I0910 18:25:59.594398       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-925076-m02"
	I0910 18:25:59.614590       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-925076-m03" podCIDRs=["10.244.2.0/24"]
	I0910 18:25:59.614632       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:25:59.614657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:26:00.044790       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:26:00.422611       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:26:03.985220       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:26:09.872053       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:26:17.323727       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-925076-m02"
	I0910 18:26:17.324088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:26:17.334343       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:26:18.898073       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:26:22.059332       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:26:22.074734       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:26:22.629794       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:26:22.630131       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-925076-m02"
	I0910 18:27:03.916530       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m02"
	I0910 18:27:03.937592       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m02"
	I0910 18:27:03.943944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.937169ms"
	I0910 18:27:03.944052       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.758µs"
	I0910 18:27:08.999128       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m02"
	I0910 18:27:23.762050       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-lsjg7"
	I0910 18:27:23.791476       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-lsjg7"
	I0910 18:27:23.792179       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-rnchg"
	I0910 18:27:23.815761       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-rnchg"
	
	
	==> kube-controller-manager [e6c580dc81be234ffff0295376e4516b395bbdda306d809df83963ee02f0b246] <==
	I0910 18:20:32.457131       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:32.679753       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:32.681262       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-925076-m02"
	I0910 18:20:33.789003       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-925076-m03\" does not exist"
	I0910 18:20:33.789132       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-925076-m02"
	I0910 18:20:33.807543       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-925076-m03" podCIDRs=["10.244.3.0/24"]
	I0910 18:20:33.808105       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:33.808268       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:34.213125       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:34.580528       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:38.029189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:43.907489       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:51.539385       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:51.539442       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-925076-m02"
	I0910 18:20:51.547896       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:20:52.834394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:21:27.852145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m02"
	I0910 18:21:27.852533       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-925076-m03"
	I0910 18:21:27.881189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m02"
	I0910 18:21:27.945073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="23.040817ms"
	I0910 18:21:27.945369       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.835µs"
	I0910 18:21:32.986568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m02"
	I0910 18:21:37.933504       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:21:37.956680       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	I0910 18:21:43.058206       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-925076-m03"
	
	
	==> kube-proxy [39bfc244fa8856bcc84943c722d7e61d6e8dcc5ddd69ad8c17c5c8ae5ded80ab] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 18:24:43.077963       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 18:24:43.101732       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.248"]
	E0910 18:24:43.101876       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 18:24:43.195217       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 18:24:43.195300       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 18:24:43.195333       1 server_linux.go:169] "Using iptables Proxier"
	I0910 18:24:43.201115       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 18:24:43.201465       1 server.go:483] "Version info" version="v1.31.0"
	I0910 18:24:43.201496       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:24:43.205950       1 config.go:197] "Starting service config controller"
	I0910 18:24:43.206000       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 18:24:43.206062       1 config.go:104] "Starting endpoint slice config controller"
	I0910 18:24:43.206084       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 18:24:43.206592       1 config.go:326] "Starting node config controller"
	I0910 18:24:43.206628       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 18:24:43.307117       1 shared_informer.go:320] Caches are synced for node config
	I0910 18:24:43.307203       1 shared_informer.go:320] Caches are synced for service config
	I0910 18:24:43.307227       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [4648cdf59f3f30d9cbbe5265394c226367f048450e2b327c1a7b81cc816a995b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 18:18:09.496560       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 18:18:09.513982       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.248"]
	E0910 18:18:09.514917       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 18:18:09.599791       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 18:18:09.599903       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 18:18:09.599929       1 server_linux.go:169] "Using iptables Proxier"
	I0910 18:18:09.610330       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 18:18:09.610572       1 server.go:483] "Version info" version="v1.31.0"
	I0910 18:18:09.610583       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:18:09.612269       1 config.go:197] "Starting service config controller"
	I0910 18:18:09.612278       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 18:18:09.612314       1 config.go:104] "Starting endpoint slice config controller"
	I0910 18:18:09.612318       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 18:18:09.612661       1 config.go:326] "Starting node config controller"
	I0910 18:18:09.612667       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 18:18:09.712633       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 18:18:09.712678       1 shared_informer.go:320] Caches are synced for service config
	I0910 18:18:09.712908       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [48859d1709a7bc6847927b022780370cde454c0419e16262db7bbaf31a878cd3] <==
	E0910 18:18:01.045406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:01.045499       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 18:18:01.045543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:01.045623       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 18:18:01.045662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:01.993502       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 18:18:01.993643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:02.032301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 18:18:02.032350       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:02.042769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 18:18:02.042924       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:02.069385       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0910 18:18:02.070596       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:02.126704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0910 18:18:02.126812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:02.141544       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0910 18:18:02.141715       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0910 18:18:02.176216       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 18:18:02.176268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:02.176877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0910 18:18:02.176954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 18:18:02.188456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0910 18:18:02.188503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0910 18:18:04.939221       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0910 18:22:57.012708       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8e57778740f1076867b7107ddec5bd8359baca4f6dc51d6e47a690a3a3263d7f] <==
	I0910 18:24:38.286733       1 serving.go:386] Generated self-signed cert in-memory
	W0910 18:24:40.376020       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0910 18:24:40.376123       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0910 18:24:40.376185       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0910 18:24:40.376197       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0910 18:24:40.474461       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0910 18:24:40.474520       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:24:40.480563       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0910 18:24:40.480747       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0910 18:24:40.480803       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 18:24:40.487604       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0910 18:24:40.581451       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 18:27:26 multinode-925076 kubelet[2957]: E0910 18:27:26.561237    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992846560799997,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:27:36 multinode-925076 kubelet[2957]: E0910 18:27:36.465484    2957 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 18:27:36 multinode-925076 kubelet[2957]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 18:27:36 multinode-925076 kubelet[2957]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 18:27:36 multinode-925076 kubelet[2957]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 18:27:36 multinode-925076 kubelet[2957]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 18:27:36 multinode-925076 kubelet[2957]: E0910 18:27:36.564302    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992856562966070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:27:36 multinode-925076 kubelet[2957]: E0910 18:27:36.564408    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992856562966070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:27:46 multinode-925076 kubelet[2957]: E0910 18:27:46.567049    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992866566399789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:27:46 multinode-925076 kubelet[2957]: E0910 18:27:46.567421    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992866566399789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:27:56 multinode-925076 kubelet[2957]: E0910 18:27:56.572605    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992876570160776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:27:56 multinode-925076 kubelet[2957]: E0910 18:27:56.572779    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992876570160776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:28:06 multinode-925076 kubelet[2957]: E0910 18:28:06.573776    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992886573526879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:28:06 multinode-925076 kubelet[2957]: E0910 18:28:06.574642    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992886573526879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:28:16 multinode-925076 kubelet[2957]: E0910 18:28:16.578794    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992896577374538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:28:16 multinode-925076 kubelet[2957]: E0910 18:28:16.579176    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992896577374538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:28:26 multinode-925076 kubelet[2957]: E0910 18:28:26.581212    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992906580654409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:28:26 multinode-925076 kubelet[2957]: E0910 18:28:26.581536    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992906580654409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:28:36 multinode-925076 kubelet[2957]: E0910 18:28:36.465206    2957 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 18:28:36 multinode-925076 kubelet[2957]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 18:28:36 multinode-925076 kubelet[2957]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 18:28:36 multinode-925076 kubelet[2957]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 18:28:36 multinode-925076 kubelet[2957]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 18:28:36 multinode-925076 kubelet[2957]: E0910 18:28:36.586644    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992916586219266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:28:36 multinode-925076 kubelet[2957]: E0910 18:28:36.586665    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725992916586219266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:28:43.555008   44563 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19598-5973/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-925076 -n multinode-925076
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-925076 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.34s)
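
The kube-proxy and kubelet excerpts above repeatedly log nftables "Operation not supported" errors and an ip6tables nat table that cannot be initialized on the multinode-925076 guest. A minimal manual check of whether these messages come from the guest kernel/image (sketch only; assumes the multinode-925076 profile is still running and that the nft binary is present in the guest image):

	$ minikube ssh -p multinode-925076 -- sudo ip6tables -t nat -L
	$ minikube ssh -p multinode-925076 -- sudo nft list tables

If the first command prints the same "Table does not exist (do you need to insmod?)" message seen in the iptables-canary errors, that log noise is a property of the guest environment and is unlikely to be what pushed TestMultiNode/serial/StopMultiNode past its stop timeout.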

                                                
                                    
x
+
TestPreload (270.53s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-845023 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0910 18:33:56.538919   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-845023 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m8.25750505s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-845023 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-845023 image pull gcr.io/k8s-minikube/busybox: (2.071690981s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-845023
E0910 18:36:18.240632   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:36:35.174789   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-845023: exit status 82 (2m0.450605917s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-845023"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-845023 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-09-10 18:36:52.528977038 +0000 UTC m=+4072.352751797
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-845023 -n test-preload-845023
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-845023 -n test-preload-845023: exit status 3 (18.611497918s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:37:11.137394   47434 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.26:22: connect: no route to host
	E0910 18:37:11.137414   47434 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.26:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-845023" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-845023" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-845023
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-845023: (1.134296547s)
--- FAIL: TestPreload (270.53s)
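
TestPreload fails at the stop step: the kvm2 driver reports GUEST_STOP_TIMEOUT (surfaced as exit status 82) because the VM never left the "Running" state within the stop window, and the follow-up status probe then fails to reach the guest over SSH (no route to host). When reproducing this locally, the libvirt side can be inspected directly; a rough sketch, assuming the kvm2 driver has named the libvirt domain after the profile (its usual behaviour) and using the log file path printed in the stop output:

	$ virsh --connect qemu:///system list --all
	$ virsh --connect qemu:///system dominfo test-preload-845023
	$ cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log

A domain still shown as running after minikube has returned exit status 82 would confirm the timeout happened on the driver/guest side rather than in the test harness.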

                                                
                                    
x
+
TestKubernetesUpgrade (454.54s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-192799 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-192799 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m29.531816605s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-192799] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-192799" primary control-plane node in "kubernetes-upgrade-192799" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 18:39:06.201298   48523 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:39:06.201432   48523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:39:06.201442   48523 out.go:358] Setting ErrFile to fd 2...
	I0910 18:39:06.201448   48523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:39:06.201647   48523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:39:06.202877   48523 out.go:352] Setting JSON to false
	I0910 18:39:06.203951   48523 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4898,"bootTime":1725988648,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 18:39:06.204007   48523 start.go:139] virtualization: kvm guest
	I0910 18:39:06.206285   48523 out.go:177] * [kubernetes-upgrade-192799] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 18:39:06.208143   48523 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:39:06.208162   48523 notify.go:220] Checking for updates...
	I0910 18:39:06.210370   48523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:39:06.211555   48523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:39:06.213489   48523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:39:06.215073   48523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 18:39:06.216612   48523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:39:06.217916   48523 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:39:06.253851   48523 out.go:177] * Using the kvm2 driver based on user configuration
	I0910 18:39:06.255100   48523 start.go:297] selected driver: kvm2
	I0910 18:39:06.255128   48523 start.go:901] validating driver "kvm2" against <nil>
	I0910 18:39:06.255142   48523 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:39:06.256135   48523 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:39:06.256241   48523 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 18:39:06.274476   48523 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 18:39:06.274530   48523 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 18:39:06.274751   48523 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 18:39:06.274780   48523 cni.go:84] Creating CNI manager for ""
	I0910 18:39:06.274794   48523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:39:06.274807   48523 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 18:39:06.274871   48523 start.go:340] cluster config:
	{Name:kubernetes-upgrade-192799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-192799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:39:06.274990   48523 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:39:06.277566   48523 out.go:177] * Starting "kubernetes-upgrade-192799" primary control-plane node in "kubernetes-upgrade-192799" cluster
	I0910 18:39:06.278617   48523 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 18:39:06.278669   48523 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0910 18:39:06.278677   48523 cache.go:56] Caching tarball of preloaded images
	I0910 18:39:06.278761   48523 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 18:39:06.278778   48523 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0910 18:39:06.279239   48523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/config.json ...
	I0910 18:39:06.279272   48523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/config.json: {Name:mk6afdad854870b8a3f5e958c5ff41a882f523ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:39:06.279464   48523 start.go:360] acquireMachinesLock for kubernetes-upgrade-192799: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:39:06.279518   48523 start.go:364] duration metric: took 27.435µs to acquireMachinesLock for "kubernetes-upgrade-192799"
	I0910 18:39:06.279541   48523 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-192799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-192799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 18:39:06.279625   48523 start.go:125] createHost starting for "" (driver="kvm2")
	I0910 18:39:06.281100   48523 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 18:39:06.281318   48523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:39:06.281372   48523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:39:06.296529   48523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36327
	I0910 18:39:06.296953   48523 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:39:06.297532   48523 main.go:141] libmachine: Using API Version  1
	I0910 18:39:06.297558   48523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:39:06.297851   48523 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:39:06.298045   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetMachineName
	I0910 18:39:06.298189   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .DriverName
	I0910 18:39:06.298338   48523 start.go:159] libmachine.API.Create for "kubernetes-upgrade-192799" (driver="kvm2")
	I0910 18:39:06.298389   48523 client.go:168] LocalClient.Create starting
	I0910 18:39:06.298424   48523 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem
	I0910 18:39:06.298462   48523 main.go:141] libmachine: Decoding PEM data...
	I0910 18:39:06.298488   48523 main.go:141] libmachine: Parsing certificate...
	I0910 18:39:06.298559   48523 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem
	I0910 18:39:06.298585   48523 main.go:141] libmachine: Decoding PEM data...
	I0910 18:39:06.298599   48523 main.go:141] libmachine: Parsing certificate...
	I0910 18:39:06.298625   48523 main.go:141] libmachine: Running pre-create checks...
	I0910 18:39:06.298638   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .PreCreateCheck
	I0910 18:39:06.298948   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetConfigRaw
	I0910 18:39:06.299431   48523 main.go:141] libmachine: Creating machine...
	I0910 18:39:06.299452   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .Create
	I0910 18:39:06.299584   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Creating KVM machine...
	I0910 18:39:06.300810   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found existing default KVM network
	I0910 18:39:06.301483   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:06.301355   48588 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015aa0}
	I0910 18:39:06.301518   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | created network xml: 
	I0910 18:39:06.301531   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | <network>
	I0910 18:39:06.301542   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG |   <name>mk-kubernetes-upgrade-192799</name>
	I0910 18:39:06.301551   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG |   <dns enable='no'/>
	I0910 18:39:06.301560   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG |   
	I0910 18:39:06.301569   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0910 18:39:06.301590   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG |     <dhcp>
	I0910 18:39:06.301600   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0910 18:39:06.301614   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG |     </dhcp>
	I0910 18:39:06.301626   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG |   </ip>
	I0910 18:39:06.301654   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG |   
	I0910 18:39:06.301690   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | </network>
	I0910 18:39:06.301705   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | 
	I0910 18:39:06.307044   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | trying to create private KVM network mk-kubernetes-upgrade-192799 192.168.39.0/24...
	I0910 18:39:06.383246   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Setting up store path in /home/jenkins/minikube-integration/19598-5973/.minikube/machines/kubernetes-upgrade-192799 ...
	I0910 18:39:06.383278   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Building disk image from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 18:39:06.383290   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | private KVM network mk-kubernetes-upgrade-192799 192.168.39.0/24 created
	I0910 18:39:06.383306   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Downloading /home/jenkins/minikube-integration/19598-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 18:39:06.383327   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:06.382719   48588 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:39:06.654480   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:06.654354   48588 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/kubernetes-upgrade-192799/id_rsa...
	I0910 18:39:06.864717   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:06.864607   48588 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/kubernetes-upgrade-192799/kubernetes-upgrade-192799.rawdisk...
	I0910 18:39:06.864746   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | Writing magic tar header
	I0910 18:39:06.864765   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | Writing SSH key tar header
	I0910 18:39:06.864788   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:06.864768   48588 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/kubernetes-upgrade-192799 ...
	I0910 18:39:06.864902   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/kubernetes-upgrade-192799
	I0910 18:39:06.864925   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/kubernetes-upgrade-192799 (perms=drwx------)
	I0910 18:39:06.864936   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines
	I0910 18:39:06.864952   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:39:06.864966   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973
	I0910 18:39:06.864978   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0910 18:39:06.864987   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | Checking permissions on dir: /home/jenkins
	I0910 18:39:06.865000   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | Checking permissions on dir: /home
	I0910 18:39:06.865016   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | Skipping /home - not owner
	I0910 18:39:06.865030   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines (perms=drwxr-xr-x)
	I0910 18:39:06.865063   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube (perms=drwxr-xr-x)
	I0910 18:39:06.865091   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973 (perms=drwxrwxr-x)
	I0910 18:39:06.865102   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0910 18:39:06.865112   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0910 18:39:06.865124   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Creating domain...
	I0910 18:39:06.866148   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) define libvirt domain using xml: 
	I0910 18:39:06.866185   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) <domain type='kvm'>
	I0910 18:39:06.866201   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)   <name>kubernetes-upgrade-192799</name>
	I0910 18:39:06.866210   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)   <memory unit='MiB'>2200</memory>
	I0910 18:39:06.866222   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)   <vcpu>2</vcpu>
	I0910 18:39:06.866233   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)   <features>
	I0910 18:39:06.866243   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     <acpi/>
	I0910 18:39:06.866253   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     <apic/>
	I0910 18:39:06.866261   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     <pae/>
	I0910 18:39:06.866277   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     
	I0910 18:39:06.866289   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)   </features>
	I0910 18:39:06.866298   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)   <cpu mode='host-passthrough'>
	I0910 18:39:06.866310   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)   
	I0910 18:39:06.866317   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)   </cpu>
	I0910 18:39:06.866326   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)   <os>
	I0910 18:39:06.866337   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     <type>hvm</type>
	I0910 18:39:06.866347   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     <boot dev='cdrom'/>
	I0910 18:39:06.866362   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     <boot dev='hd'/>
	I0910 18:39:06.866374   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     <bootmenu enable='no'/>
	I0910 18:39:06.866384   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)   </os>
	I0910 18:39:06.866392   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)   <devices>
	I0910 18:39:06.866403   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     <disk type='file' device='cdrom'>
	I0910 18:39:06.866420   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/kubernetes-upgrade-192799/boot2docker.iso'/>
	I0910 18:39:06.866442   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)       <target dev='hdc' bus='scsi'/>
	I0910 18:39:06.866456   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)       <readonly/>
	I0910 18:39:06.866468   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     </disk>
	I0910 18:39:06.866480   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     <disk type='file' device='disk'>
	I0910 18:39:06.866491   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0910 18:39:06.866513   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/kubernetes-upgrade-192799/kubernetes-upgrade-192799.rawdisk'/>
	I0910 18:39:06.866554   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)       <target dev='hda' bus='virtio'/>
	I0910 18:39:06.866578   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     </disk>
	I0910 18:39:06.866597   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     <interface type='network'>
	I0910 18:39:06.866615   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)       <source network='mk-kubernetes-upgrade-192799'/>
	I0910 18:39:06.866629   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)       <model type='virtio'/>
	I0910 18:39:06.866636   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     </interface>
	I0910 18:39:06.866645   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     <interface type='network'>
	I0910 18:39:06.866657   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)       <source network='default'/>
	I0910 18:39:06.866666   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)       <model type='virtio'/>
	I0910 18:39:06.866678   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     </interface>
	I0910 18:39:06.866688   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     <serial type='pty'>
	I0910 18:39:06.866698   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)       <target port='0'/>
	I0910 18:39:06.866709   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     </serial>
	I0910 18:39:06.866719   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     <console type='pty'>
	I0910 18:39:06.866729   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)       <target type='serial' port='0'/>
	I0910 18:39:06.866739   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     </console>
	I0910 18:39:06.866752   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     <rng model='virtio'>
	I0910 18:39:06.866773   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)       <backend model='random'>/dev/random</backend>
	I0910 18:39:06.866792   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     </rng>
	I0910 18:39:06.866812   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     
	I0910 18:39:06.866821   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)     
	I0910 18:39:06.866828   48523 main.go:141] libmachine: (kubernetes-upgrade-192799)   </devices>
	I0910 18:39:06.866839   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) </domain>
	I0910 18:39:06.866849   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) 
	I0910 18:39:06.870484   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:fc:c0:f8 in network default
	I0910 18:39:06.871022   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:06.871048   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Ensuring networks are active...
	I0910 18:39:06.871756   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Ensuring network default is active
	I0910 18:39:06.872189   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Ensuring network mk-kubernetes-upgrade-192799 is active
	I0910 18:39:06.872747   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Getting domain xml...
	I0910 18:39:06.873536   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Creating domain...
	I0910 18:39:08.363850   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Waiting to get IP...
	I0910 18:39:08.364872   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:08.365251   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | unable to find current IP address of domain kubernetes-upgrade-192799 in network mk-kubernetes-upgrade-192799
	I0910 18:39:08.365308   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:08.365244   48588 retry.go:31] will retry after 256.73944ms: waiting for machine to come up
	I0910 18:39:08.623860   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:08.624429   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | unable to find current IP address of domain kubernetes-upgrade-192799 in network mk-kubernetes-upgrade-192799
	I0910 18:39:08.624467   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:08.624316   48588 retry.go:31] will retry after 373.202989ms: waiting for machine to come up
	I0910 18:39:08.998821   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:08.999280   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | unable to find current IP address of domain kubernetes-upgrade-192799 in network mk-kubernetes-upgrade-192799
	I0910 18:39:08.999301   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:08.999252   48588 retry.go:31] will retry after 454.455825ms: waiting for machine to come up
	I0910 18:39:09.455389   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:09.455850   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | unable to find current IP address of domain kubernetes-upgrade-192799 in network mk-kubernetes-upgrade-192799
	I0910 18:39:09.455889   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:09.455780   48588 retry.go:31] will retry after 554.607169ms: waiting for machine to come up
	I0910 18:39:10.011514   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:10.011940   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | unable to find current IP address of domain kubernetes-upgrade-192799 in network mk-kubernetes-upgrade-192799
	I0910 18:39:10.011966   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:10.011899   48588 retry.go:31] will retry after 744.235197ms: waiting for machine to come up
	I0910 18:39:10.758053   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:10.758566   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | unable to find current IP address of domain kubernetes-upgrade-192799 in network mk-kubernetes-upgrade-192799
	I0910 18:39:10.758588   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:10.758522   48588 retry.go:31] will retry after 603.593642ms: waiting for machine to come up
	I0910 18:39:11.364403   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:11.365198   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | unable to find current IP address of domain kubernetes-upgrade-192799 in network mk-kubernetes-upgrade-192799
	I0910 18:39:11.365237   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:11.365173   48588 retry.go:31] will retry after 754.175245ms: waiting for machine to come up
	I0910 18:39:12.120618   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:12.121136   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | unable to find current IP address of domain kubernetes-upgrade-192799 in network mk-kubernetes-upgrade-192799
	I0910 18:39:12.121164   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:12.121096   48588 retry.go:31] will retry after 1.050158349s: waiting for machine to come up
	I0910 18:39:13.173494   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:13.173925   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | unable to find current IP address of domain kubernetes-upgrade-192799 in network mk-kubernetes-upgrade-192799
	I0910 18:39:13.173958   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:13.173884   48588 retry.go:31] will retry after 1.254181275s: waiting for machine to come up
	I0910 18:39:14.430075   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:14.430486   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | unable to find current IP address of domain kubernetes-upgrade-192799 in network mk-kubernetes-upgrade-192799
	I0910 18:39:14.430520   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:14.430437   48588 retry.go:31] will retry after 1.842403649s: waiting for machine to come up
	I0910 18:39:16.275668   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:16.276088   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | unable to find current IP address of domain kubernetes-upgrade-192799 in network mk-kubernetes-upgrade-192799
	I0910 18:39:16.276125   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:16.276032   48588 retry.go:31] will retry after 2.091478512s: waiting for machine to come up
	I0910 18:39:18.369860   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:18.370252   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | unable to find current IP address of domain kubernetes-upgrade-192799 in network mk-kubernetes-upgrade-192799
	I0910 18:39:18.370286   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:18.370205   48588 retry.go:31] will retry after 2.928236497s: waiting for machine to come up
	I0910 18:39:21.301509   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:21.301883   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | unable to find current IP address of domain kubernetes-upgrade-192799 in network mk-kubernetes-upgrade-192799
	I0910 18:39:21.301906   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:21.301833   48588 retry.go:31] will retry after 4.326932244s: waiting for machine to come up
	I0910 18:39:25.633251   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:25.633612   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | unable to find current IP address of domain kubernetes-upgrade-192799 in network mk-kubernetes-upgrade-192799
	I0910 18:39:25.633636   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | I0910 18:39:25.633567   48588 retry.go:31] will retry after 3.454608815s: waiting for machine to come up
	I0910 18:39:29.091813   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:29.092359   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Found IP for machine: 192.168.39.145
	I0910 18:39:29.092383   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Reserving static IP address...
	I0910 18:39:29.092397   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has current primary IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:29.092762   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-192799", mac: "52:54:00:2a:d1:04", ip: "192.168.39.145"} in network mk-kubernetes-upgrade-192799
	I0910 18:39:29.164881   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | Getting to WaitForSSH function...
	I0910 18:39:29.164911   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Reserved static IP address: 192.168.39.145
	I0910 18:39:29.164955   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Waiting for SSH to be available...
	I0910 18:39:29.167372   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:29.167794   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2a:d1:04}
	I0910 18:39:29.167823   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:29.167955   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | Using SSH client type: external
	I0910 18:39:29.167984   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/kubernetes-upgrade-192799/id_rsa (-rw-------)
	I0910 18:39:29.168030   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/kubernetes-upgrade-192799/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:39:29.168048   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | About to run SSH command:
	I0910 18:39:29.168065   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | exit 0
	I0910 18:39:29.288968   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | SSH cmd err, output: <nil>: 
	I0910 18:39:29.289234   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) KVM machine creation complete!
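The "will retry after …: waiting for machine to come up" lines above show the driver polling libvirt for the guest's DHCP lease with a growing, jittered delay until an IP appears. Below is a minimal, self-contained sketch of that pattern; it is not minikube's retry.go, and the delay growth, jitter, cap, and the waitForIP/lookup names are illustrative assumptions only.

// backoff_sketch.go: sketch of a grow-and-jitter retry loop like the one logged above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a jittered, growing delay between attempts.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 250 * time.Millisecond
	for time.Since(start) < deadline {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		// Add up to 50% jitter, then grow the base delay (capped at ~5s).
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay = delay * 3 / 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.39.145", nil
	}, time.Minute)
	fmt.Println(ip, err)
}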
	I0910 18:39:29.289518   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetConfigRaw
	I0910 18:39:29.290105   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .DriverName
	I0910 18:39:29.290273   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .DriverName
	I0910 18:39:29.290411   48523 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0910 18:39:29.290424   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetState
	I0910 18:39:29.291634   48523 main.go:141] libmachine: Detecting operating system of created instance...
	I0910 18:39:29.291649   48523 main.go:141] libmachine: Waiting for SSH to be available...
	I0910 18:39:29.291659   48523 main.go:141] libmachine: Getting to WaitForSSH function...
	I0910 18:39:29.291668   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:39:29.293782   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:29.294080   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:39:29.294108   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:29.294265   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:39:29.294454   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:39:29.294581   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:39:29.294695   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:39:29.294807   48523 main.go:141] libmachine: Using SSH client type: native
	I0910 18:39:29.295026   48523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0910 18:39:29.295038   48523 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0910 18:39:29.396388   48523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:39:29.396417   48523 main.go:141] libmachine: Detecting the provisioner...
	I0910 18:39:29.396430   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:39:29.399020   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:29.399309   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:39:29.399329   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:29.399496   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:39:29.399667   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:39:29.399838   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:39:29.399978   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:39:29.400124   48523 main.go:141] libmachine: Using SSH client type: native
	I0910 18:39:29.400298   48523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0910 18:39:29.400309   48523 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0910 18:39:29.501769   48523 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0910 18:39:29.501840   48523 main.go:141] libmachine: found compatible host: buildroot
	I0910 18:39:29.501849   48523 main.go:141] libmachine: Provisioning with buildroot...
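The provisioner is detected by running `cat /etc/os-release` over SSH and reading the ID field from the output shown above. A small sketch of parsing that output follows; parseOSRelease is an illustrative helper, not libmachine's actual detector.

// osrelease_sketch.go: parse os-release style KEY=VALUE output to pick a provisioner.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(data string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(data))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			kv[k] = strings.Trim(v, `"`)
		}
	}
	return kv
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(out)
	fmt.Println("compatible host:", info["ID"]) // prints "buildroot"
}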
	I0910 18:39:29.501857   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetMachineName
	I0910 18:39:29.502117   48523 buildroot.go:166] provisioning hostname "kubernetes-upgrade-192799"
	I0910 18:39:29.502147   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetMachineName
	I0910 18:39:29.502323   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:39:29.504717   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:29.504992   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:39:29.505028   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:29.505172   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:39:29.505356   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:39:29.505514   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:39:29.505655   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:39:29.505843   48523 main.go:141] libmachine: Using SSH client type: native
	I0910 18:39:29.506016   48523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0910 18:39:29.506032   48523 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-192799 && echo "kubernetes-upgrade-192799" | sudo tee /etc/hostname
	I0910 18:39:29.623670   48523 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-192799
	
	I0910 18:39:29.623703   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:39:29.626391   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:29.626685   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:39:29.626733   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:29.626917   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:39:29.627128   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:39:29.627286   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:39:29.627409   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:39:29.627573   48523 main.go:141] libmachine: Using SSH client type: native
	I0910 18:39:29.627799   48523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0910 18:39:29.627824   48523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-192799' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-192799/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-192799' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:39:29.738382   48523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
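Hostname provisioning above is just two shell fragments run over SSH: one persists the name to /etc/hostname, the other patches the 127.0.1.1 entry in /etc/hosts only if no matching line exists. A minimal sketch that composes those same commands follows; setHostnameCmd and patchHostsCmd are illustrative names, not minikube APIs.

// hostname_cmds.go: build the two provisioning commands shown in the log.
package main

import "fmt"

// setHostnameCmd sets the running hostname and persists it to /etc/hostname.
func setHostnameCmd(name string) string {
	return fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
}

// patchHostsCmd rewrites the 127.0.1.1 entry (or appends one) so the new
// hostname resolves locally, mirroring the grep/sed guard in the log.
func patchHostsCmd(name string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(setHostnameCmd("kubernetes-upgrade-192799"))
	fmt.Println(patchHostsCmd("kubernetes-upgrade-192799"))
}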
	I0910 18:39:29.738408   48523 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:39:29.738458   48523 buildroot.go:174] setting up certificates
	I0910 18:39:29.738471   48523 provision.go:84] configureAuth start
	I0910 18:39:29.738482   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetMachineName
	I0910 18:39:29.738784   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetIP
	I0910 18:39:29.741284   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:29.741640   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:39:29.741666   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:29.741791   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:39:29.744050   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:29.744375   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:39:29.744405   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:29.744500   48523 provision.go:143] copyHostCerts
	I0910 18:39:29.744561   48523 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:39:29.744574   48523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:39:29.744664   48523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:39:29.744808   48523 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:39:29.744821   48523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:39:29.744860   48523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:39:29.744960   48523 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:39:29.744976   48523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:39:29.745021   48523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:39:29.745122   48523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-192799 san=[127.0.0.1 192.168.39.145 kubernetes-upgrade-192799 localhost minikube]
	I0910 18:39:30.029521   48523 provision.go:177] copyRemoteCerts
	I0910 18:39:30.029580   48523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:39:30.029626   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:39:30.032201   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:30.032554   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:39:30.032582   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:30.032748   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:39:30.032983   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:39:30.033162   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:39:30.033319   48523 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/kubernetes-upgrade-192799/id_rsa Username:docker}
	I0910 18:39:30.115610   48523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:39:30.143841   48523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0910 18:39:30.171034   48523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:39:30.197009   48523 provision.go:87] duration metric: took 458.523345ms to configureAuth
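configureAuth above copies the host CA/client certs and generates a server certificate whose SANs are listed at 18:39:29.745 (127.0.0.1, 192.168.39.145, the node name, localhost, minikube). A minimal crypto/x509 sketch of issuing a certificate with those SANs follows; it self-signs for brevity (the real flow signs with the minikube CA), and the 26280h lifetime simply mirrors the CertExpiration value in the profile dump further down.

// servercert_sketch.go: generic x509 example, not minikube's provision code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-192799"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-192799", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.145")},
	}
	// Self-signed here for brevity; the logged flow signs with ca.pem/ca-key.pem.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}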
	I0910 18:39:30.197035   48523 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:39:30.197205   48523 config.go:182] Loaded profile config "kubernetes-upgrade-192799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0910 18:39:30.197291   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:39:30.199965   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:30.200358   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:39:30.200389   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:30.200546   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:39:30.200772   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:39:30.200928   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:39:30.201035   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:39:30.201188   48523 main.go:141] libmachine: Using SSH client type: native
	I0910 18:39:30.201396   48523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0910 18:39:30.201421   48523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:39:30.420059   48523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:39:30.420085   48523 main.go:141] libmachine: Checking connection to Docker...
	I0910 18:39:30.420096   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetURL
	I0910 18:39:30.421529   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | Using libvirt version 6000000
	I0910 18:39:30.423904   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:30.424237   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:39:30.424281   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:30.424401   48523 main.go:141] libmachine: Docker is up and running!
	I0910 18:39:30.424415   48523 main.go:141] libmachine: Reticulating splines...
	I0910 18:39:30.424424   48523 client.go:171] duration metric: took 24.126023115s to LocalClient.Create
	I0910 18:39:30.424453   48523 start.go:167] duration metric: took 24.126115429s to libmachine.API.Create "kubernetes-upgrade-192799"
	I0910 18:39:30.424465   48523 start.go:293] postStartSetup for "kubernetes-upgrade-192799" (driver="kvm2")
	I0910 18:39:30.424478   48523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:39:30.424501   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .DriverName
	I0910 18:39:30.424727   48523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:39:30.424766   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:39:30.426802   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:30.427087   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:39:30.427125   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:30.427251   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:39:30.427445   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:39:30.427627   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:39:30.427765   48523 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/kubernetes-upgrade-192799/id_rsa Username:docker}
	I0910 18:39:30.507793   48523 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:39:30.512150   48523 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:39:30.512177   48523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:39:30.512246   48523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:39:30.512348   48523 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:39:30.512466   48523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:39:30.521646   48523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:39:30.545183   48523 start.go:296] duration metric: took 120.704404ms for postStartSetup
	I0910 18:39:30.545280   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetConfigRaw
	I0910 18:39:30.545860   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetIP
	I0910 18:39:30.548187   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:30.548458   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:39:30.548480   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:30.548731   48523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/config.json ...
	I0910 18:39:30.548935   48523 start.go:128] duration metric: took 24.269298783s to createHost
	I0910 18:39:30.548959   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:39:30.550971   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:30.551331   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:39:30.551361   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:30.551488   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:39:30.551678   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:39:30.551832   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:39:30.551996   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:39:30.552152   48523 main.go:141] libmachine: Using SSH client type: native
	I0910 18:39:30.552314   48523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0910 18:39:30.552328   48523 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:39:30.653903   48523 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725993570.633480404
	
	I0910 18:39:30.653924   48523 fix.go:216] guest clock: 1725993570.633480404
	I0910 18:39:30.653931   48523 fix.go:229] Guest: 2024-09-10 18:39:30.633480404 +0000 UTC Remote: 2024-09-10 18:39:30.548947255 +0000 UTC m=+24.390249005 (delta=84.533149ms)
	I0910 18:39:30.653968   48523 fix.go:200] guest clock delta is within tolerance: 84.533149ms
	I0910 18:39:30.653978   48523 start.go:83] releasing machines lock for "kubernetes-upgrade-192799", held for 24.37445032s
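The fix.go lines above compare the guest's `date +%s.%N` against the host clock and accept the ~84ms delta as "within tolerance". A tiny sketch of that check follows; the 2s tolerance value is an assumption for illustration, since the log only shows that the 84.533149ms delta was accepted.

// clockdelta_sketch.go: guest-vs-host clock delta check, using the timestamps from the log.
package main

import (
	"fmt"
	"time"
)

// withinTolerance returns the absolute delta between the two clocks and
// whether it falls inside the allowed skew.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1725993570, 633480404) // from `date +%s.%N` on the guest
	host := time.Date(2024, 9, 10, 18, 39, 30, 548947255, time.UTC)
	d, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}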
	I0910 18:39:30.654011   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .DriverName
	I0910 18:39:30.654324   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetIP
	I0910 18:39:30.657302   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:30.657627   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:39:30.657660   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:30.657742   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .DriverName
	I0910 18:39:30.658203   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .DriverName
	I0910 18:39:30.658375   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .DriverName
	I0910 18:39:30.658438   48523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:39:30.658504   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:39:30.658579   48523 ssh_runner.go:195] Run: cat /version.json
	I0910 18:39:30.658604   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:39:30.661238   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:30.661504   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:30.661583   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:39:30.661624   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:30.661712   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:39:30.661856   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:39:30.661876   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:39:30.661917   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:30.661981   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:39:30.662048   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:39:30.662126   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:39:30.662241   48523 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/kubernetes-upgrade-192799/id_rsa Username:docker}
	I0910 18:39:30.662251   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:39:30.662407   48523 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/kubernetes-upgrade-192799/id_rsa Username:docker}
	I0910 18:39:30.767934   48523 ssh_runner.go:195] Run: systemctl --version
	I0910 18:39:30.775812   48523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:39:30.935182   48523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:39:30.941488   48523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:39:30.941563   48523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:39:30.959129   48523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:39:30.959154   48523 start.go:495] detecting cgroup driver to use...
	I0910 18:39:30.959232   48523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:39:30.978340   48523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:39:30.995386   48523 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:39:30.995447   48523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:39:31.013133   48523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:39:31.027179   48523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:39:31.150956   48523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:39:31.320143   48523 docker.go:233] disabling docker service ...
	I0910 18:39:31.320218   48523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:39:31.334995   48523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:39:31.348239   48523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:39:31.467403   48523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:39:31.585002   48523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:39:31.598879   48523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:39:31.616853   48523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0910 18:39:31.616924   48523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:39:31.627179   48523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:39:31.627235   48523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:39:31.637396   48523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:39:31.647472   48523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:39:31.657855   48523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:39:31.668362   48523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:39:31.678856   48523 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:39:31.678907   48523 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:39:31.693541   48523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:39:31.704337   48523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:39:31.817576   48523 ssh_runner.go:195] Run: sudo systemctl restart crio
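Taken together, the crictl.yaml write and the sed edits above leave the runtime configured roughly as follows before crio is restarted. The exact file layout is an assumption; each key and value is the one set by the commands in the log.

/etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

/etc/crio/crio.conf.d/02-crio.conf (relevant keys)
    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"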
	I0910 18:39:31.928106   48523 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:39:31.928177   48523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:39:31.933871   48523 start.go:563] Will wait 60s for crictl version
	I0910 18:39:31.933933   48523 ssh_runner.go:195] Run: which crictl
	I0910 18:39:31.937937   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:39:31.978198   48523 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:39:31.978280   48523 ssh_runner.go:195] Run: crio --version
	I0910 18:39:32.017515   48523 ssh_runner.go:195] Run: crio --version
	I0910 18:39:32.048230   48523 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0910 18:39:32.049788   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetIP
	I0910 18:39:32.052997   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:32.053575   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:39:32.053608   48523 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:39:32.053886   48523 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 18:39:32.060481   48523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:39:32.075504   48523 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-192799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-192799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:39:32.075621   48523 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 18:39:32.075669   48523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:39:32.112439   48523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0910 18:39:32.112510   48523 ssh_runner.go:195] Run: which lz4
	I0910 18:39:32.119034   48523 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 18:39:32.123685   48523 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 18:39:32.123722   48523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0910 18:39:33.844591   48523 crio.go:462] duration metric: took 1.725590831s to copy over tarball
	I0910 18:39:33.844677   48523 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 18:39:36.428251   48523 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.583529369s)
	I0910 18:39:36.428298   48523 crio.go:469] duration metric: took 2.583675256s to extract the tarball
	I0910 18:39:36.428309   48523 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 18:39:36.472973   48523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:39:36.517805   48523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0910 18:39:36.517830   48523 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 18:39:36.517894   48523 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:39:36.517931   48523 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:39:36.517965   48523 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0910 18:39:36.517927   48523 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:39:36.518019   48523 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:39:36.517958   48523 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:39:36.517951   48523 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0910 18:39:36.517895   48523 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:39:36.519611   48523 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:39:36.519626   48523 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0910 18:39:36.519636   48523 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0910 18:39:36.519666   48523 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:39:36.519669   48523 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:39:36.519611   48523 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:39:36.519702   48523 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:39:36.519929   48523 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:39:36.680973   48523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:39:36.684923   48523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:39:36.692034   48523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0910 18:39:36.694516   48523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:39:36.700730   48523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0910 18:39:36.720679   48523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0910 18:39:36.739649   48523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:39:36.741049   48523 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0910 18:39:36.741112   48523 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:39:36.741163   48523 ssh_runner.go:195] Run: which crictl
	I0910 18:39:36.829897   48523 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0910 18:39:36.829941   48523 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:39:36.829990   48523 ssh_runner.go:195] Run: which crictl
	I0910 18:39:36.865821   48523 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0910 18:39:36.865856   48523 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0910 18:39:36.865895   48523 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:39:36.865863   48523 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0910 18:39:36.865951   48523 ssh_runner.go:195] Run: which crictl
	I0910 18:39:36.865966   48523 ssh_runner.go:195] Run: which crictl
	I0910 18:39:36.877529   48523 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0910 18:39:36.877565   48523 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0910 18:39:36.877575   48523 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:39:36.877586   48523 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:39:36.877624   48523 ssh_runner.go:195] Run: which crictl
	I0910 18:39:36.877627   48523 ssh_runner.go:195] Run: which crictl
	I0910 18:39:36.877538   48523 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0910 18:39:36.877655   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:39:36.877669   48523 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0910 18:39:36.877696   48523 ssh_runner.go:195] Run: which crictl
	I0910 18:39:36.877709   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:39:36.877721   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:39:36.877793   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:39:36.949259   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:39:36.949285   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:39:36.949308   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:39:36.962531   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:39:36.962539   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:39:36.986587   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:39:37.001577   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:39:37.109944   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:39:37.109983   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:39:37.109944   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:39:37.113152   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:39:37.113226   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:39:37.149879   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:39:37.158799   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:39:37.258438   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:39:37.288769   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:39:37.288893   48523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0910 18:39:37.289800   48523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0910 18:39:37.289865   48523 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:39:37.289879   48523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0910 18:39:37.295749   48523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0910 18:39:37.321064   48523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:39:37.358058   48523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0910 18:39:37.358097   48523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0910 18:39:37.367907   48523 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0910 18:39:37.493357   48523 cache_images.go:92] duration metric: took 975.509659ms to LoadCachedImages
	W0910 18:39:37.493449   48523 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
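	The two warnings above are the root of the slow path that follows: each v1.20.0 control-plane image was flagged as "needs transfer", stale tags were removed with crictl, and minikube then tried to load the corresponding tarballs from the local image cache, which had never been populated on this host, so kubeadm ends up pulling the images itself during preflight. A minimal sketch of that cache-existence check, assuming only the path layout visible in the log (illustrative; not minikube's actual code):

	// Illustrative sketch (not minikube's implementation): report which cached
	// image tarballs exist under the layout shown in the log, e.g.
	// ~/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// cachedImagePath mirrors the "image:tag" -> "image_tag" file naming seen above.
	func cachedImagePath(cacheDir, image string) string {
		name := image
		if i := strings.LastIndex(image, ":"); i >= 0 {
			name = image[:i] + "_" + image[i+1:]
		}
		return filepath.Join(cacheDir, name)
	}

	func main() {
		cacheDir := os.ExpandEnv("$HOME/.minikube/cache/images/amd64")
		for _, img := range []string{
			"registry.k8s.io/kube-proxy:v1.20.0",
			"registry.k8s.io/kube-apiserver:v1.20.0",
			"registry.k8s.io/etcd:3.4.13-0",
		} {
			p := cachedImagePath(cacheDir, img)
			if _, err := os.Stat(p); err != nil {
				fmt.Printf("missing cache entry for %s: %v\n", img, err)
				continue
			}
			fmt.Printf("cached: %s\n", p)
		}
	}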
	I0910 18:39:37.493468   48523 kubeadm.go:934] updating node { 192.168.39.145 8443 v1.20.0 crio true true} ...
	I0910 18:39:37.493717   48523 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-192799 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-192799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:39:37.493801   48523 ssh_runner.go:195] Run: crio config
	I0910 18:39:37.548633   48523 cni.go:84] Creating CNI manager for ""
	I0910 18:39:37.548657   48523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:39:37.548669   48523 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:39:37.548697   48523 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-192799 NodeName:kubernetes-upgrade-192799 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0910 18:39:37.548856   48523 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-192799"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
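	Two details of the generated KubeletConfiguration above matter when reading the failure below: the kubelet is pinned to the cgroupfs cgroup driver, and swap/disk-pressure safeguards are effectively disabled (failSwapOn: false, all evictionHard thresholds at 0%, imageGCHighThresholdPercent: 100). A minimal Go sketch of decoding those fields from such a YAML document, assuming the gopkg.in/yaml.v3 package (illustrative; not part of minikube or the test harness):

	// Illustrative sketch: decode a few fields of a KubeletConfiguration-style
	// YAML document like the one generated above.
	// Requires: go get gopkg.in/yaml.v3
	package main

	import (
		"fmt"
		"log"

		"gopkg.in/yaml.v3"
	)

	type kubeletConfig struct {
		CgroupDriver  string            `yaml:"cgroupDriver"`
		FailSwapOn    bool              `yaml:"failSwapOn"`
		EvictionHard  map[string]string `yaml:"evictionHard"`
		StaticPodPath string            `yaml:"staticPodPath"`
	}

	const doc = `
	cgroupDriver: cgroupfs
	failSwapOn: false
	evictionHard:
	  nodefs.available: "0%"
	  imagefs.available: "0%"
	staticPodPath: /etc/kubernetes/manifests
	`

	func main() {
		var cfg kubeletConfig
		if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("cgroupDriver=%s failSwapOn=%v staticPodPath=%s eviction=%v\n",
			cfg.CgroupDriver, cfg.FailSwapOn, cfg.StaticPodPath, cfg.EvictionHard)
	}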
	I0910 18:39:37.548925   48523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0910 18:39:37.561841   48523 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:39:37.561925   48523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:39:37.572022   48523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0910 18:39:37.591429   48523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:39:37.608919   48523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0910 18:39:37.626866   48523 ssh_runner.go:195] Run: grep 192.168.39.145	control-plane.minikube.internal$ /etc/hosts
	I0910 18:39:37.630845   48523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:39:37.643404   48523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:39:37.767457   48523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:39:37.784008   48523 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799 for IP: 192.168.39.145
	I0910 18:39:37.784046   48523 certs.go:194] generating shared ca certs ...
	I0910 18:39:37.784068   48523 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:39:37.784260   48523 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:39:37.784326   48523 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:39:37.784339   48523 certs.go:256] generating profile certs ...
	I0910 18:39:37.784411   48523 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/client.key
	I0910 18:39:37.784441   48523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/client.crt with IP's: []
	I0910 18:39:38.136446   48523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/client.crt ...
	I0910 18:39:38.136477   48523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/client.crt: {Name:mk9e53ee824d0e3403f7e525a14760f1d0d8edd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:39:38.136696   48523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/client.key ...
	I0910 18:39:38.136718   48523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/client.key: {Name:mk50180c851c16ef3f33b31d67cc85c3deb66423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:39:38.136848   48523 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/apiserver.key.4e17f7c7
	I0910 18:39:38.136877   48523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/apiserver.crt.4e17f7c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.145]
	I0910 18:39:38.352886   48523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/apiserver.crt.4e17f7c7 ...
	I0910 18:39:38.352919   48523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/apiserver.crt.4e17f7c7: {Name:mkf94a632ccf290f23ef75bb60dce4e3963ebbaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:39:38.353096   48523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/apiserver.key.4e17f7c7 ...
	I0910 18:39:38.353117   48523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/apiserver.key.4e17f7c7: {Name:mkcde407a7d654df026b080966ee1b02408c820d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:39:38.353203   48523 certs.go:381] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/apiserver.crt.4e17f7c7 -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/apiserver.crt
	I0910 18:39:38.353279   48523 certs.go:385] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/apiserver.key.4e17f7c7 -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/apiserver.key
	I0910 18:39:38.353336   48523 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/proxy-client.key
	I0910 18:39:38.353352   48523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/proxy-client.crt with IP's: []
	I0910 18:39:38.484410   48523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/proxy-client.crt ...
	I0910 18:39:38.484437   48523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/proxy-client.crt: {Name:mkb1dbf476c9c2976b6f92a1d9979afdbc10b0e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:39:38.484589   48523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/proxy-client.key ...
	I0910 18:39:38.484600   48523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/proxy-client.key: {Name:mk07d6333bc8a416be35823f43bcd27de6b9753f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:39:38.484772   48523 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:39:38.484807   48523 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:39:38.484817   48523 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:39:38.484840   48523 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:39:38.484861   48523 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:39:38.484882   48523 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:39:38.484916   48523 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:39:38.485484   48523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:39:38.512262   48523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:39:38.536939   48523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:39:38.561383   48523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:39:38.591702   48523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0910 18:39:38.647733   48523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 18:39:38.692465   48523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:39:38.725043   48523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:39:38.752874   48523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:39:38.777083   48523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:39:38.800574   48523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:39:38.825278   48523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:39:38.845270   48523 ssh_runner.go:195] Run: openssl version
	I0910 18:39:38.851771   48523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:39:38.863576   48523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:39:38.868244   48523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:39:38.868304   48523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:39:38.874585   48523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:39:38.887575   48523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:39:38.899692   48523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:39:38.904458   48523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:39:38.904512   48523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:39:38.910410   48523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:39:38.921524   48523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:39:38.932578   48523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:39:38.937158   48523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:39:38.937227   48523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:39:38.942890   48523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:39:38.953807   48523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:39:38.957946   48523 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 18:39:38.957995   48523 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-192799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-192799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:39:38.958060   48523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:39:38.958123   48523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:39:38.997209   48523 cri.go:89] found id: ""
	I0910 18:39:38.997286   48523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:39:39.007637   48523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:39:39.017980   48523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:39:39.027840   48523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:39:39.027862   48523 kubeadm.go:157] found existing configuration files:
	
	I0910 18:39:39.027909   48523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:39:39.036917   48523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:39:39.036985   48523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:39:39.046188   48523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:39:39.054935   48523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:39:39.055000   48523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:39:39.066408   48523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:39:39.077430   48523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:39:39.077496   48523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:39:39.088535   48523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:39:39.099507   48523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:39:39.099556   48523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:39:39.110745   48523 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 18:39:39.396979   48523 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 18:41:37.149280   48523 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0910 18:41:37.149388   48523 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0910 18:41:37.151266   48523 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0910 18:41:37.151328   48523 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 18:41:37.151408   48523 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 18:41:37.151527   48523 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 18:41:37.151639   48523 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 18:41:37.151725   48523 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 18:41:37.154806   48523 out.go:235]   - Generating certificates and keys ...
	I0910 18:41:37.154894   48523 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 18:41:37.154995   48523 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 18:41:37.155128   48523 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 18:41:37.155215   48523 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0910 18:41:37.155299   48523 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0910 18:41:37.155374   48523 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0910 18:41:37.155458   48523 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0910 18:41:37.155613   48523 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-192799 localhost] and IPs [192.168.39.145 127.0.0.1 ::1]
	I0910 18:41:37.155660   48523 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0910 18:41:37.155809   48523 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-192799 localhost] and IPs [192.168.39.145 127.0.0.1 ::1]
	I0910 18:41:37.155899   48523 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 18:41:37.155980   48523 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 18:41:37.156043   48523 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0910 18:41:37.156106   48523 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 18:41:37.156148   48523 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 18:41:37.156203   48523 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 18:41:37.156260   48523 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 18:41:37.156305   48523 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 18:41:37.156400   48523 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 18:41:37.156483   48523 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 18:41:37.156558   48523 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 18:41:37.156645   48523 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 18:41:37.158063   48523 out.go:235]   - Booting up control plane ...
	I0910 18:41:37.158164   48523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 18:41:37.158280   48523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 18:41:37.158378   48523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 18:41:37.158472   48523 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 18:41:37.158777   48523 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 18:41:37.158847   48523 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0910 18:41:37.158929   48523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:41:37.159292   48523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:41:37.159390   48523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:41:37.159572   48523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:41:37.159632   48523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:41:37.159798   48523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:41:37.159891   48523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:41:37.160086   48523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:41:37.160152   48523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:41:37.160319   48523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:41:37.160329   48523 kubeadm.go:310] 
	I0910 18:41:37.160362   48523 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0910 18:41:37.160397   48523 kubeadm.go:310] 		timed out waiting for the condition
	I0910 18:41:37.160404   48523 kubeadm.go:310] 
	I0910 18:41:37.160432   48523 kubeadm.go:310] 	This error is likely caused by:
	I0910 18:41:37.160465   48523 kubeadm.go:310] 		- The kubelet is not running
	I0910 18:41:37.160571   48523 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0910 18:41:37.160579   48523 kubeadm.go:310] 
	I0910 18:41:37.160666   48523 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0910 18:41:37.160696   48523 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0910 18:41:37.160724   48523 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0910 18:41:37.160730   48523 kubeadm.go:310] 
	I0910 18:41:37.160831   48523 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0910 18:41:37.160899   48523 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0910 18:41:37.160905   48523 kubeadm.go:310] 
	I0910 18:41:37.160986   48523 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0910 18:41:37.161061   48523 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0910 18:41:37.161150   48523 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0910 18:41:37.161215   48523 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0910 18:41:37.161237   48523 kubeadm.go:310] 
	W0910 18:41:37.161325   48523 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-192799 localhost] and IPs [192.168.39.145 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-192799 localhost] and IPs [192.168.39.145 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-192799 localhost] and IPs [192.168.39.145 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-192799 localhost] and IPs [192.168.39.145 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
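	The repeated [kubelet-check] lines above are kubeadm probing the kubelet's local healthz endpoint on port 10248 and getting "connection refused": no kubelet ever started listening, so the static control-plane pods were never created and init timed out waiting for the kubelet; minikube then resets and retries below. A minimal Go sketch of that same probe, assuming the default kubelet healthz port (illustrative only):

	// Illustrative sketch: the same kubelet healthz probe the [kubelet-check]
	// lines above describe; "connection refused" means nothing is listening
	// on the kubelet's healthz port at all.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz") // default kubelet healthz endpoint
		if err != nil {
			fmt.Println("kubelet healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("kubelet healthz: %s %s\n", resp.Status, string(body))
	}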
	I0910 18:41:37.161360   48523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 18:41:38.134571   48523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:41:38.153747   48523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:41:38.166052   48523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:41:38.166083   48523 kubeadm.go:157] found existing configuration files:
	
	I0910 18:41:38.166150   48523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:41:38.176203   48523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:41:38.176267   48523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:41:38.186171   48523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:41:38.195334   48523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:41:38.195402   48523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:41:38.207265   48523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:41:38.219841   48523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:41:38.219913   48523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:41:38.233751   48523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:41:38.246209   48523 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:41:38.246281   48523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:41:38.260480   48523 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 18:41:38.355943   48523 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0910 18:41:38.356079   48523 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 18:41:38.531134   48523 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 18:41:38.531297   48523 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 18:41:38.531426   48523 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 18:41:38.762967   48523 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 18:41:38.765995   48523 out.go:235]   - Generating certificates and keys ...
	I0910 18:41:38.766095   48523 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 18:41:38.766181   48523 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 18:41:38.766285   48523 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 18:41:38.766370   48523 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 18:41:38.766463   48523 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 18:41:38.766539   48523 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 18:41:38.766616   48523 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 18:41:38.766703   48523 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 18:41:38.766804   48523 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 18:41:38.766905   48523 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 18:41:38.766957   48523 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 18:41:38.767031   48523 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 18:41:39.014586   48523 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 18:41:39.129372   48523 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 18:41:39.379624   48523 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 18:41:39.875928   48523 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 18:41:39.890599   48523 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 18:41:39.891620   48523 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 18:41:39.891697   48523 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 18:41:40.043674   48523 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 18:41:40.045621   48523 out.go:235]   - Booting up control plane ...
	I0910 18:41:40.045750   48523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 18:41:40.051010   48523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 18:41:40.051981   48523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 18:41:40.052646   48523 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 18:41:40.064499   48523 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 18:42:20.066189   48523 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0910 18:42:20.066314   48523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:42:20.066514   48523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:42:25.066984   48523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:42:25.067256   48523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:42:35.067794   48523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:42:35.068039   48523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:42:55.069217   48523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:42:55.069480   48523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:43:35.069032   48523 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:43:35.069266   48523 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:43:35.069296   48523 kubeadm.go:310] 
	I0910 18:43:35.069365   48523 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0910 18:43:35.069420   48523 kubeadm.go:310] 		timed out waiting for the condition
	I0910 18:43:35.069432   48523 kubeadm.go:310] 
	I0910 18:43:35.069477   48523 kubeadm.go:310] 	This error is likely caused by:
	I0910 18:43:35.069528   48523 kubeadm.go:310] 		- The kubelet is not running
	I0910 18:43:35.069669   48523 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0910 18:43:35.069681   48523 kubeadm.go:310] 
	I0910 18:43:35.069821   48523 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0910 18:43:35.069895   48523 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0910 18:43:35.069956   48523 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0910 18:43:35.069964   48523 kubeadm.go:310] 
	I0910 18:43:35.070103   48523 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0910 18:43:35.070225   48523 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0910 18:43:35.070238   48523 kubeadm.go:310] 
	I0910 18:43:35.070439   48523 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0910 18:43:35.070571   48523 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0910 18:43:35.070677   48523 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0910 18:43:35.070788   48523 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0910 18:43:35.070798   48523 kubeadm.go:310] 
	I0910 18:43:35.071369   48523 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 18:43:35.071493   48523 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0910 18:43:35.071608   48523 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0910 18:43:35.071634   48523 kubeadm.go:394] duration metric: took 3m56.11364333s to StartCluster
	I0910 18:43:35.071685   48523 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 18:43:35.071748   48523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 18:43:35.116954   48523 cri.go:89] found id: ""
	I0910 18:43:35.116986   48523 logs.go:276] 0 containers: []
	W0910 18:43:35.117002   48523 logs.go:278] No container was found matching "kube-apiserver"
	I0910 18:43:35.117011   48523 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 18:43:35.117101   48523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 18:43:35.154010   48523 cri.go:89] found id: ""
	I0910 18:43:35.154032   48523 logs.go:276] 0 containers: []
	W0910 18:43:35.154039   48523 logs.go:278] No container was found matching "etcd"
	I0910 18:43:35.154045   48523 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 18:43:35.154094   48523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 18:43:35.187471   48523 cri.go:89] found id: ""
	I0910 18:43:35.187502   48523 logs.go:276] 0 containers: []
	W0910 18:43:35.187520   48523 logs.go:278] No container was found matching "coredns"
	I0910 18:43:35.187528   48523 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 18:43:35.187600   48523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 18:43:35.222073   48523 cri.go:89] found id: ""
	I0910 18:43:35.222104   48523 logs.go:276] 0 containers: []
	W0910 18:43:35.222115   48523 logs.go:278] No container was found matching "kube-scheduler"
	I0910 18:43:35.222123   48523 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 18:43:35.222186   48523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 18:43:35.257409   48523 cri.go:89] found id: ""
	I0910 18:43:35.257436   48523 logs.go:276] 0 containers: []
	W0910 18:43:35.257444   48523 logs.go:278] No container was found matching "kube-proxy"
	I0910 18:43:35.257450   48523 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 18:43:35.257500   48523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 18:43:35.295725   48523 cri.go:89] found id: ""
	I0910 18:43:35.295748   48523 logs.go:276] 0 containers: []
	W0910 18:43:35.295758   48523 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 18:43:35.295766   48523 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 18:43:35.295829   48523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 18:43:35.332371   48523 cri.go:89] found id: ""
	I0910 18:43:35.332404   48523 logs.go:276] 0 containers: []
	W0910 18:43:35.332412   48523 logs.go:278] No container was found matching "kindnet"
	I0910 18:43:35.332424   48523 logs.go:123] Gathering logs for describe nodes ...
	I0910 18:43:35.332453   48523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 18:43:35.448274   48523 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 18:43:35.448293   48523 logs.go:123] Gathering logs for CRI-O ...
	I0910 18:43:35.448305   48523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 18:43:35.550531   48523 logs.go:123] Gathering logs for container status ...
	I0910 18:43:35.550571   48523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 18:43:35.605344   48523 logs.go:123] Gathering logs for kubelet ...
	I0910 18:43:35.605378   48523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 18:43:35.661045   48523 logs.go:123] Gathering logs for dmesg ...
	I0910 18:43:35.661079   48523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0910 18:43:35.676072   48523 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0910 18:43:35.676118   48523 out.go:270] * 
	* 
	W0910 18:43:35.676176   48523 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0910 18:43:35.676191   48523 out.go:270] * 
	* 
	W0910 18:43:35.677021   48523 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 18:43:35.679996   48523 out.go:201] 
	W0910 18:43:35.680943   48523 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0910 18:43:35.681000   48523 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0910 18:43:35.681035   48523 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0910 18:43:35.682383   48523 out.go:201] 

                                                
                                                
** /stderr **
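The kubeadm output above fails the same kubelet health probe against http://localhost:10248/healthz for the full 4m0s wait, so the control-plane static pods never get a healthy kubelet to run on. A minimal troubleshooting sketch, assembled from the commands the log itself recommends (the profile name is taken from this run; the inner commands are meant to be run inside the VM via minikube ssh):

	# Hedged sketch: inspect kubelet and CRI-O state inside the failed node.
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-192799
	# Inside the VM: is the kubelet unit running, and what does its journal say?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50
	# Probe the same healthz endpoint kubeadm polls (10248 is the kubelet health port).
	curl -sSL http://localhost:10248/healthz
	# List any control-plane containers CRI-O actually started.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause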
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-192799 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
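The start fails with exit status 109 after the K8S_KUBELET_NOT_RUNNING error above, and the log's own suggestion is to retry with an explicit kubelet cgroup driver. A hedged sketch of that retry, reusing the flags from the failing command; whether it actually resolves the v1.20.0 timeout on this image is not verified here:

	# Retry the oldest-version start with the cgroup driver pinned to systemd, as the log suggests.
	out/minikube-linux-amd64 start -p kubernetes-upgrade-192799 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd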
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-192799
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-192799: (1.382969935s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-192799 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-192799 status --format={{.Host}}: exit status 7 (60.565539ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
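Here exit status 7 accompanies a "Stopped" host, which the test tolerates ("may be ok") before restarting at the newer version. A hedged sketch of the same check widened to other status fields (the .Kubelet and .APIServer template field names are assumed from minikube's status output; verify them against your minikube version):

	# Query host, kubelet and apiserver state in one line for this profile.
	out/minikube-linux-amd64 -p kubernetes-upgrade-192799 status \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'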
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-192799 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0910 18:43:39.607312   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:56.538019   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-192799 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m30.471080928s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-192799 version --output=json
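The kubectl version check above confirms the apiserver is actually serving v1.31.0 after the upgrade. A small sketch of the same verification done by hand (assumes jq is available; the context name is taken from this run):

	# Extract the server's gitVersion from the JSON version report.
	kubectl --context kubernetes-upgrade-192799 version --output=json \
	  | jq -r '.serverVersion.gitVersion'    # expected to print v1.31.0 for this run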
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-192799 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-192799 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (73.368918ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-192799] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-192799
	    minikube start -p kubernetes-upgrade-192799 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1927992 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-192799 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
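The exit status 106 output above refuses the in-place downgrade and lists three alternatives. A hedged sketch of the first alternative, using the commands from the suggestion plus the driver and runtime flags this test passes elsewhere; only do this when discarding the v1.31.0 cluster is acceptable:

	# Recreate the profile at the older Kubernetes version instead of downgrading in place.
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-192799
	out/minikube-linux-amd64 start -p kubernetes-upgrade-192799 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio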
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-192799 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-192799 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m28.916443965s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-10 18:46:36.699952662 +0000 UTC m=+4656.523727457
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-192799 -n kubernetes-upgrade-192799
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-192799 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-192799 logs -n 25: (1.895709727s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p NoKubernetes-229565                | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:42 UTC | 10 Sep 24 18:42 UTC |
	| start   | -p NoKubernetes-229565                | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:42 UTC | 10 Sep 24 18:43 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-926585             | running-upgrade-926585    | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	| start   | -p force-systemd-flag-652506          | force-systemd-flag-652506 | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:44 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-229565 sudo           | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-229565                | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	| start   | -p NoKubernetes-229565                | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:44 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p pause-459729                       | pause-459729              | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	| start   | -p force-systemd-env-156940           | force-systemd-env-156940  | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:44 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-192799          | kubernetes-upgrade-192799 | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	| start   | -p kubernetes-upgrade-192799          | kubernetes-upgrade-192799 | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:45 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-652506 ssh cat     | force-systemd-flag-652506 | jenkins | v1.34.0 | 10 Sep 24 18:44 UTC | 10 Sep 24 18:44 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-652506          | force-systemd-flag-652506 | jenkins | v1.34.0 | 10 Sep 24 18:44 UTC | 10 Sep 24 18:44 UTC |
	| start   | -p cert-expiration-333713             | cert-expiration-333713    | jenkins | v1.34.0 | 10 Sep 24 18:44 UTC | 10 Sep 24 18:45 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-229565 sudo           | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:44 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-229565                | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:44 UTC | 10 Sep 24 18:44 UTC |
	| start   | -p cert-options-331722                | cert-options-331722       | jenkins | v1.34.0 | 10 Sep 24 18:44 UTC | 10 Sep 24 18:46 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-156940           | force-systemd-env-156940  | jenkins | v1.34.0 | 10 Sep 24 18:44 UTC | 10 Sep 24 18:44 UTC |
	| start   | -p auto-642043 --memory=3072          | auto-642043               | jenkins | v1.34.0 | 10 Sep 24 18:44 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-192799          | kubernetes-upgrade-192799 | jenkins | v1.34.0 | 10 Sep 24 18:45 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-192799          | kubernetes-upgrade-192799 | jenkins | v1.34.0 | 10 Sep 24 18:45 UTC | 10 Sep 24 18:46 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-331722 ssh               | cert-options-331722       | jenkins | v1.34.0 | 10 Sep 24 18:46 UTC | 10 Sep 24 18:46 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-331722 -- sudo        | cert-options-331722       | jenkins | v1.34.0 | 10 Sep 24 18:46 UTC | 10 Sep 24 18:46 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-331722                | cert-options-331722       | jenkins | v1.34.0 | 10 Sep 24 18:46 UTC | 10 Sep 24 18:46 UTC |
	| start   | -p kindnet-642043                     | kindnet-642043            | jenkins | v1.34.0 | 10 Sep 24 18:46 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:46:01
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:46:01.300831   56745 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:46:01.301131   56745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:46:01.301142   56745 out.go:358] Setting ErrFile to fd 2...
	I0910 18:46:01.301147   56745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:46:01.301313   56745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:46:01.301889   56745 out.go:352] Setting JSON to false
	I0910 18:46:01.302837   56745 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5313,"bootTime":1725988648,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 18:46:01.302894   56745 start.go:139] virtualization: kvm guest
	I0910 18:46:01.305830   56745 out.go:177] * [kindnet-642043] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 18:46:01.307131   56745 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:46:01.307127   56745 notify.go:220] Checking for updates...
	I0910 18:46:01.309669   56745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:46:01.310947   56745 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:46:01.312153   56745 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:46:01.313375   56745 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 18:46:01.314594   56745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:46:01.316058   56745 config.go:182] Loaded profile config "auto-642043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:46:01.316144   56745 config.go:182] Loaded profile config "cert-expiration-333713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:46:01.316236   56745 config.go:182] Loaded profile config "kubernetes-upgrade-192799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:46:01.316309   56745 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:46:01.351563   56745 out.go:177] * Using the kvm2 driver based on user configuration
	I0910 18:46:01.352709   56745 start.go:297] selected driver: kvm2
	I0910 18:46:01.352721   56745 start.go:901] validating driver "kvm2" against <nil>
	I0910 18:46:01.352733   56745 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:46:01.353484   56745 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:46:01.353553   56745 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 18:46:01.367572   56745 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 18:46:01.367610   56745 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 18:46:01.367797   56745 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:46:01.367848   56745 cni.go:84] Creating CNI manager for "kindnet"
	I0910 18:46:01.367856   56745 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0910 18:46:01.367901   56745 start.go:340] cluster config:
	{Name:kindnet-642043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-642043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:46:01.367995   56745 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:46:01.369687   56745 out.go:177] * Starting "kindnet-642043" primary control-plane node in "kindnet-642043" cluster
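	[editor's note] The single-line cluster config dumped just above is the profile data minikube saves to the profile's config.json (see the profile.go "Saving config" line later in this log). As a rough, hypothetical illustration only — a trimmed-down struct carrying a handful of the fields visible in that dump, not minikube's actual ClusterConfig types — the shape of that data is roughly:

	```go
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Field names below are taken from the config dump in the log, but this is a
	// simplified stand-in, NOT minikube's real ClusterConfig / KubernetesConfig.
	type KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
		NetworkPlugin     string
		CNI               string
	}

	type ClusterConfig struct {
		Name             string
		Driver           string
		Memory           int // MB
		CPUs             int
		DiskSize         int // MB
		KubernetesConfig KubernetesConfig
	}

	func main() {
		cfg := ClusterConfig{
			Name:     "kindnet-642043",
			Driver:   "kvm2",
			Memory:   3072,
			CPUs:     2,
			DiskSize: 20000,
			KubernetesConfig: KubernetesConfig{
				KubernetesVersion: "v1.31.0",
				ClusterName:       "kindnet-642043",
				ContainerRuntime:  "crio",
				NetworkPlugin:     "cni",
				CNI:               "kindnet",
			},
		}
		out, _ := json.MarshalIndent(cfg, "", "  ")
		fmt.Println(string(out))
	}
	```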
	I0910 18:46:03.170242   56007 start.go:364] duration metric: took 55.25128795s to acquireMachinesLock for "kubernetes-upgrade-192799"
	I0910 18:46:03.170303   56007 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:46:03.170316   56007 fix.go:54] fixHost starting: 
	I0910 18:46:03.170771   56007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:46:03.170823   56007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:46:03.190287   56007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35261
	I0910 18:46:03.190664   56007 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:46:03.191122   56007 main.go:141] libmachine: Using API Version  1
	I0910 18:46:03.191143   56007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:46:03.191413   56007 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:46:03.191580   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .DriverName
	I0910 18:46:03.191697   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetState
	I0910 18:46:03.193186   56007 fix.go:112] recreateIfNeeded on kubernetes-upgrade-192799: state=Running err=<nil>
	W0910 18:46:03.193201   56007 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:46:03.195108   56007 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-192799" VM ...
	I0910 18:46:01.715557   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:01.716178   55718 main.go:141] libmachine: (auto-642043) Found IP for machine: 192.168.72.99
	I0910 18:46:01.716201   55718 main.go:141] libmachine: (auto-642043) Reserving static IP address...
	I0910 18:46:01.716214   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has current primary IP address 192.168.72.99 and MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:01.716619   55718 main.go:141] libmachine: (auto-642043) DBG | unable to find host DHCP lease matching {name: "auto-642043", mac: "52:54:00:0e:63:50", ip: "192.168.72.99"} in network mk-auto-642043
	I0910 18:46:01.794282   55718 main.go:141] libmachine: (auto-642043) DBG | Getting to WaitForSSH function...
	I0910 18:46:01.794305   55718 main.go:141] libmachine: (auto-642043) Reserved static IP address: 192.168.72.99
	I0910 18:46:01.794317   55718 main.go:141] libmachine: (auto-642043) Waiting for SSH to be available...
	I0910 18:46:01.797046   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:01.797452   55718 main.go:141] libmachine: (auto-642043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:63:50", ip: ""} in network mk-auto-642043: {Iface:virbr4 ExpiryTime:2024-09-10 19:45:56 +0000 UTC Type:0 Mac:52:54:00:0e:63:50 Iaid: IPaddr:192.168.72.99 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0e:63:50}
	I0910 18:46:01.797474   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined IP address 192.168.72.99 and MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:01.797661   55718 main.go:141] libmachine: (auto-642043) DBG | Using SSH client type: external
	I0910 18:46:01.797690   55718 main.go:141] libmachine: (auto-642043) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/auto-642043/id_rsa (-rw-------)
	I0910 18:46:01.797737   55718 main.go:141] libmachine: (auto-642043) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.99 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/auto-642043/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:46:01.797757   55718 main.go:141] libmachine: (auto-642043) DBG | About to run SSH command:
	I0910 18:46:01.797788   55718 main.go:141] libmachine: (auto-642043) DBG | exit 0
	I0910 18:46:01.929340   55718 main.go:141] libmachine: (auto-642043) DBG | SSH cmd err, output: <nil>: 
	I0910 18:46:01.929634   55718 main.go:141] libmachine: (auto-642043) KVM machine creation complete!
	I0910 18:46:01.929917   55718 main.go:141] libmachine: (auto-642043) Calling .GetConfigRaw
	I0910 18:46:01.930430   55718 main.go:141] libmachine: (auto-642043) Calling .DriverName
	I0910 18:46:01.930649   55718 main.go:141] libmachine: (auto-642043) Calling .DriverName
	I0910 18:46:01.930871   55718 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0910 18:46:01.930914   55718 main.go:141] libmachine: (auto-642043) Calling .GetState
	I0910 18:46:01.932209   55718 main.go:141] libmachine: Detecting operating system of created instance...
	I0910 18:46:01.932224   55718 main.go:141] libmachine: Waiting for SSH to be available...
	I0910 18:46:01.932230   55718 main.go:141] libmachine: Getting to WaitForSSH function...
	I0910 18:46:01.932235   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHHostname
	I0910 18:46:01.934588   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:01.934945   55718 main.go:141] libmachine: (auto-642043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:63:50", ip: ""} in network mk-auto-642043: {Iface:virbr4 ExpiryTime:2024-09-10 19:45:56 +0000 UTC Type:0 Mac:52:54:00:0e:63:50 Iaid: IPaddr:192.168.72.99 Prefix:24 Hostname:auto-642043 Clientid:01:52:54:00:0e:63:50}
	I0910 18:46:01.934969   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined IP address 192.168.72.99 and MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:01.935176   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHPort
	I0910 18:46:01.935331   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHKeyPath
	I0910 18:46:01.935467   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHKeyPath
	I0910 18:46:01.935587   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHUsername
	I0910 18:46:01.935741   55718 main.go:141] libmachine: Using SSH client type: native
	I0910 18:46:01.935991   55718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.99 22 <nil> <nil>}
	I0910 18:46:01.936009   55718 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0910 18:46:02.044291   55718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:46:02.044322   55718 main.go:141] libmachine: Detecting the provisioner...
	I0910 18:46:02.044330   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHHostname
	I0910 18:46:02.047140   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:02.047518   55718 main.go:141] libmachine: (auto-642043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:63:50", ip: ""} in network mk-auto-642043: {Iface:virbr4 ExpiryTime:2024-09-10 19:45:56 +0000 UTC Type:0 Mac:52:54:00:0e:63:50 Iaid: IPaddr:192.168.72.99 Prefix:24 Hostname:auto-642043 Clientid:01:52:54:00:0e:63:50}
	I0910 18:46:02.047555   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined IP address 192.168.72.99 and MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:02.047659   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHPort
	I0910 18:46:02.047909   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHKeyPath
	I0910 18:46:02.048072   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHKeyPath
	I0910 18:46:02.048284   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHUsername
	I0910 18:46:02.048450   55718 main.go:141] libmachine: Using SSH client type: native
	I0910 18:46:02.048667   55718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.99 22 <nil> <nil>}
	I0910 18:46:02.048680   55718 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0910 18:46:02.162127   55718 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0910 18:46:02.162198   55718 main.go:141] libmachine: found compatible host: buildroot
	I0910 18:46:02.162208   55718 main.go:141] libmachine: Provisioning with buildroot...
	I0910 18:46:02.162218   55718 main.go:141] libmachine: (auto-642043) Calling .GetMachineName
	I0910 18:46:02.162553   55718 buildroot.go:166] provisioning hostname "auto-642043"
	I0910 18:46:02.162583   55718 main.go:141] libmachine: (auto-642043) Calling .GetMachineName
	I0910 18:46:02.162793   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHHostname
	I0910 18:46:02.165501   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:02.165841   55718 main.go:141] libmachine: (auto-642043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:63:50", ip: ""} in network mk-auto-642043: {Iface:virbr4 ExpiryTime:2024-09-10 19:45:56 +0000 UTC Type:0 Mac:52:54:00:0e:63:50 Iaid: IPaddr:192.168.72.99 Prefix:24 Hostname:auto-642043 Clientid:01:52:54:00:0e:63:50}
	I0910 18:46:02.165875   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined IP address 192.168.72.99 and MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:02.166005   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHPort
	I0910 18:46:02.166174   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHKeyPath
	I0910 18:46:02.166274   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHKeyPath
	I0910 18:46:02.166408   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHUsername
	I0910 18:46:02.166532   55718 main.go:141] libmachine: Using SSH client type: native
	I0910 18:46:02.166749   55718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.99 22 <nil> <nil>}
	I0910 18:46:02.166767   55718 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-642043 && echo "auto-642043" | sudo tee /etc/hostname
	I0910 18:46:02.294529   55718 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-642043
	
	I0910 18:46:02.294566   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHHostname
	I0910 18:46:02.297161   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:02.297577   55718 main.go:141] libmachine: (auto-642043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:63:50", ip: ""} in network mk-auto-642043: {Iface:virbr4 ExpiryTime:2024-09-10 19:45:56 +0000 UTC Type:0 Mac:52:54:00:0e:63:50 Iaid: IPaddr:192.168.72.99 Prefix:24 Hostname:auto-642043 Clientid:01:52:54:00:0e:63:50}
	I0910 18:46:02.297610   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined IP address 192.168.72.99 and MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:02.297781   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHPort
	I0910 18:46:02.297968   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHKeyPath
	I0910 18:46:02.298152   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHKeyPath
	I0910 18:46:02.298320   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHUsername
	I0910 18:46:02.298508   55718 main.go:141] libmachine: Using SSH client type: native
	I0910 18:46:02.298698   55718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.99 22 <nil> <nil>}
	I0910 18:46:02.298720   55718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-642043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-642043/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-642043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:46:02.418788   55718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:46:02.418819   55718 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:46:02.418881   55718 buildroot.go:174] setting up certificates
	I0910 18:46:02.418892   55718 provision.go:84] configureAuth start
	I0910 18:46:02.418906   55718 main.go:141] libmachine: (auto-642043) Calling .GetMachineName
	I0910 18:46:02.419153   55718 main.go:141] libmachine: (auto-642043) Calling .GetIP
	I0910 18:46:02.421960   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:02.422344   55718 main.go:141] libmachine: (auto-642043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:63:50", ip: ""} in network mk-auto-642043: {Iface:virbr4 ExpiryTime:2024-09-10 19:45:56 +0000 UTC Type:0 Mac:52:54:00:0e:63:50 Iaid: IPaddr:192.168.72.99 Prefix:24 Hostname:auto-642043 Clientid:01:52:54:00:0e:63:50}
	I0910 18:46:02.422378   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined IP address 192.168.72.99 and MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:02.422562   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHHostname
	I0910 18:46:02.424817   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:02.425144   55718 main.go:141] libmachine: (auto-642043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:63:50", ip: ""} in network mk-auto-642043: {Iface:virbr4 ExpiryTime:2024-09-10 19:45:56 +0000 UTC Type:0 Mac:52:54:00:0e:63:50 Iaid: IPaddr:192.168.72.99 Prefix:24 Hostname:auto-642043 Clientid:01:52:54:00:0e:63:50}
	I0910 18:46:02.425166   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined IP address 192.168.72.99 and MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:02.425342   55718 provision.go:143] copyHostCerts
	I0910 18:46:02.425406   55718 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:46:02.425419   55718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:46:02.425483   55718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:46:02.425613   55718 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:46:02.425623   55718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:46:02.425653   55718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:46:02.425740   55718 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:46:02.425749   55718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:46:02.425778   55718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:46:02.425868   55718 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.auto-642043 san=[127.0.0.1 192.168.72.99 auto-642043 localhost minikube]
	I0910 18:46:02.500658   55718 provision.go:177] copyRemoteCerts
	I0910 18:46:02.500747   55718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:46:02.500778   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHHostname
	I0910 18:46:02.503281   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:02.503610   55718 main.go:141] libmachine: (auto-642043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:63:50", ip: ""} in network mk-auto-642043: {Iface:virbr4 ExpiryTime:2024-09-10 19:45:56 +0000 UTC Type:0 Mac:52:54:00:0e:63:50 Iaid: IPaddr:192.168.72.99 Prefix:24 Hostname:auto-642043 Clientid:01:52:54:00:0e:63:50}
	I0910 18:46:02.503640   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined IP address 192.168.72.99 and MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:02.503795   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHPort
	I0910 18:46:02.503970   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHKeyPath
	I0910 18:46:02.504102   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHUsername
	I0910 18:46:02.504213   55718 sshutil.go:53] new ssh client: &{IP:192.168.72.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/auto-642043/id_rsa Username:docker}
	I0910 18:46:02.593064   55718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:46:02.617616   55718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0910 18:46:02.641286   55718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:46:02.664981   55718 provision.go:87] duration metric: took 246.072242ms to configureAuth
	I0910 18:46:02.665012   55718 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:46:02.665197   55718 config.go:182] Loaded profile config "auto-642043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:46:02.665278   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHHostname
	I0910 18:46:02.667783   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:02.668141   55718 main.go:141] libmachine: (auto-642043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:63:50", ip: ""} in network mk-auto-642043: {Iface:virbr4 ExpiryTime:2024-09-10 19:45:56 +0000 UTC Type:0 Mac:52:54:00:0e:63:50 Iaid: IPaddr:192.168.72.99 Prefix:24 Hostname:auto-642043 Clientid:01:52:54:00:0e:63:50}
	I0910 18:46:02.668159   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined IP address 192.168.72.99 and MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:02.668538   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHPort
	I0910 18:46:02.668740   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHKeyPath
	I0910 18:46:02.668901   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHKeyPath
	I0910 18:46:02.669004   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHUsername
	I0910 18:46:02.669174   55718 main.go:141] libmachine: Using SSH client type: native
	I0910 18:46:02.669351   55718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.99 22 <nil> <nil>}
	I0910 18:46:02.669368   55718 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:46:02.907788   55718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:46:02.907822   55718 main.go:141] libmachine: Checking connection to Docker...
	I0910 18:46:02.907834   55718 main.go:141] libmachine: (auto-642043) Calling .GetURL
	I0910 18:46:02.909087   55718 main.go:141] libmachine: (auto-642043) DBG | Using libvirt version 6000000
	I0910 18:46:02.911666   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:02.911998   55718 main.go:141] libmachine: (auto-642043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:63:50", ip: ""} in network mk-auto-642043: {Iface:virbr4 ExpiryTime:2024-09-10 19:45:56 +0000 UTC Type:0 Mac:52:54:00:0e:63:50 Iaid: IPaddr:192.168.72.99 Prefix:24 Hostname:auto-642043 Clientid:01:52:54:00:0e:63:50}
	I0910 18:46:02.912020   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined IP address 192.168.72.99 and MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:02.912282   55718 main.go:141] libmachine: Docker is up and running!
	I0910 18:46:02.912307   55718 main.go:141] libmachine: Reticulating splines...
	I0910 18:46:02.912316   55718 client.go:171] duration metric: took 22.050234494s to LocalClient.Create
	I0910 18:46:02.912339   55718 start.go:167] duration metric: took 22.050295321s to libmachine.API.Create "auto-642043"
	I0910 18:46:02.912351   55718 start.go:293] postStartSetup for "auto-642043" (driver="kvm2")
	I0910 18:46:02.912361   55718 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:46:02.912380   55718 main.go:141] libmachine: (auto-642043) Calling .DriverName
	I0910 18:46:02.912692   55718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:46:02.912714   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHHostname
	I0910 18:46:02.914993   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:02.915342   55718 main.go:141] libmachine: (auto-642043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:63:50", ip: ""} in network mk-auto-642043: {Iface:virbr4 ExpiryTime:2024-09-10 19:45:56 +0000 UTC Type:0 Mac:52:54:00:0e:63:50 Iaid: IPaddr:192.168.72.99 Prefix:24 Hostname:auto-642043 Clientid:01:52:54:00:0e:63:50}
	I0910 18:46:02.915362   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined IP address 192.168.72.99 and MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:02.915514   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHPort
	I0910 18:46:02.915695   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHKeyPath
	I0910 18:46:02.915830   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHUsername
	I0910 18:46:02.915948   55718 sshutil.go:53] new ssh client: &{IP:192.168.72.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/auto-642043/id_rsa Username:docker}
	I0910 18:46:03.005726   55718 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:46:03.010336   55718 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:46:03.010365   55718 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:46:03.010426   55718 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:46:03.010527   55718 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:46:03.010645   55718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:46:03.020261   55718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:46:03.052471   55718 start.go:296] duration metric: took 140.107746ms for postStartSetup
	I0910 18:46:03.052517   55718 main.go:141] libmachine: (auto-642043) Calling .GetConfigRaw
	I0910 18:46:03.053217   55718 main.go:141] libmachine: (auto-642043) Calling .GetIP
	I0910 18:46:03.055842   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:03.056195   55718 main.go:141] libmachine: (auto-642043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:63:50", ip: ""} in network mk-auto-642043: {Iface:virbr4 ExpiryTime:2024-09-10 19:45:56 +0000 UTC Type:0 Mac:52:54:00:0e:63:50 Iaid: IPaddr:192.168.72.99 Prefix:24 Hostname:auto-642043 Clientid:01:52:54:00:0e:63:50}
	I0910 18:46:03.056220   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined IP address 192.168.72.99 and MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:03.056609   55718 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/config.json ...
	I0910 18:46:03.056909   55718 start.go:128] duration metric: took 22.2186199s to createHost
	I0910 18:46:03.056938   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHHostname
	I0910 18:46:03.059312   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:03.059639   55718 main.go:141] libmachine: (auto-642043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:63:50", ip: ""} in network mk-auto-642043: {Iface:virbr4 ExpiryTime:2024-09-10 19:45:56 +0000 UTC Type:0 Mac:52:54:00:0e:63:50 Iaid: IPaddr:192.168.72.99 Prefix:24 Hostname:auto-642043 Clientid:01:52:54:00:0e:63:50}
	I0910 18:46:03.059663   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined IP address 192.168.72.99 and MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:03.059790   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHPort
	I0910 18:46:03.059965   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHKeyPath
	I0910 18:46:03.060122   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHKeyPath
	I0910 18:46:03.060214   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHUsername
	I0910 18:46:03.060333   55718 main.go:141] libmachine: Using SSH client type: native
	I0910 18:46:03.060504   55718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.99 22 <nil> <nil>}
	I0910 18:46:03.060521   55718 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:46:03.170070   55718 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725993963.146595078
	
	I0910 18:46:03.170098   55718 fix.go:216] guest clock: 1725993963.146595078
	I0910 18:46:03.170108   55718 fix.go:229] Guest: 2024-09-10 18:46:03.146595078 +0000 UTC Remote: 2024-09-10 18:46:03.05692408 +0000 UTC m=+72.993496410 (delta=89.670998ms)
	I0910 18:46:03.170133   55718 fix.go:200] guest clock delta is within tolerance: 89.670998ms
	I0910 18:46:03.170141   55718 start.go:83] releasing machines lock for "auto-642043", held for 22.332035053s
	I0910 18:46:03.170174   55718 main.go:141] libmachine: (auto-642043) Calling .DriverName
	I0910 18:46:03.170444   55718 main.go:141] libmachine: (auto-642043) Calling .GetIP
	I0910 18:46:03.173396   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:03.173811   55718 main.go:141] libmachine: (auto-642043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:63:50", ip: ""} in network mk-auto-642043: {Iface:virbr4 ExpiryTime:2024-09-10 19:45:56 +0000 UTC Type:0 Mac:52:54:00:0e:63:50 Iaid: IPaddr:192.168.72.99 Prefix:24 Hostname:auto-642043 Clientid:01:52:54:00:0e:63:50}
	I0910 18:46:03.173837   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined IP address 192.168.72.99 and MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:03.173975   55718 main.go:141] libmachine: (auto-642043) Calling .DriverName
	I0910 18:46:03.174447   55718 main.go:141] libmachine: (auto-642043) Calling .DriverName
	I0910 18:46:03.174653   55718 main.go:141] libmachine: (auto-642043) Calling .DriverName
	I0910 18:46:03.174734   55718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:46:03.174781   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHHostname
	I0910 18:46:03.174894   55718 ssh_runner.go:195] Run: cat /version.json
	I0910 18:46:03.174918   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHHostname
	I0910 18:46:03.177345   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:03.177365   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:03.177729   55718 main.go:141] libmachine: (auto-642043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:63:50", ip: ""} in network mk-auto-642043: {Iface:virbr4 ExpiryTime:2024-09-10 19:45:56 +0000 UTC Type:0 Mac:52:54:00:0e:63:50 Iaid: IPaddr:192.168.72.99 Prefix:24 Hostname:auto-642043 Clientid:01:52:54:00:0e:63:50}
	I0910 18:46:03.177757   55718 main.go:141] libmachine: (auto-642043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:63:50", ip: ""} in network mk-auto-642043: {Iface:virbr4 ExpiryTime:2024-09-10 19:45:56 +0000 UTC Type:0 Mac:52:54:00:0e:63:50 Iaid: IPaddr:192.168.72.99 Prefix:24 Hostname:auto-642043 Clientid:01:52:54:00:0e:63:50}
	I0910 18:46:03.177780   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined IP address 192.168.72.99 and MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:03.177796   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined IP address 192.168.72.99 and MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:03.177935   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHPort
	I0910 18:46:03.178055   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHPort
	I0910 18:46:03.178150   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHKeyPath
	I0910 18:46:03.178259   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHKeyPath
	I0910 18:46:03.178342   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHUsername
	I0910 18:46:03.178426   55718 main.go:141] libmachine: (auto-642043) Calling .GetSSHUsername
	I0910 18:46:03.178480   55718 sshutil.go:53] new ssh client: &{IP:192.168.72.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/auto-642043/id_rsa Username:docker}
	I0910 18:46:03.178539   55718 sshutil.go:53] new ssh client: &{IP:192.168.72.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/auto-642043/id_rsa Username:docker}
	I0910 18:46:03.285383   55718 ssh_runner.go:195] Run: systemctl --version
	I0910 18:46:03.291429   55718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:46:03.464604   55718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:46:03.472213   55718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:46:03.472287   55718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:46:03.488302   55718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:46:03.488324   55718 start.go:495] detecting cgroup driver to use...
	I0910 18:46:03.488394   55718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:46:03.505578   55718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:46:03.519633   55718 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:46:03.519717   55718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:46:03.537510   55718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:46:03.552184   55718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:46:03.674500   55718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:46:03.848917   55718 docker.go:233] disabling docker service ...
	I0910 18:46:03.848976   55718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:46:03.866254   55718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:46:03.880712   55718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:46:04.002282   55718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:46:04.124666   55718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:46:04.139065   55718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:46:04.160676   55718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:46:04.160734   55718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:46:04.171550   55718 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:46:04.171632   55718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:46:04.182398   55718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:46:04.193002   55718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:46:04.203061   55718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:46:04.213830   55718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:46:04.224105   55718 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:46:04.241471   55718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:46:04.251425   55718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:46:04.260738   55718 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:46:04.260784   55718 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:46:04.273866   55718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:46:04.283089   55718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:46:04.406693   55718 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:46:04.493129   55718 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:46:04.493207   55718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:46:04.498112   55718 start.go:563] Will wait 60s for crictl version
	I0910 18:46:04.498175   55718 ssh_runner.go:195] Run: which crictl
	I0910 18:46:04.501771   55718 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:46:04.539075   55718 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:46:04.539160   55718 ssh_runner.go:195] Run: crio --version
	I0910 18:46:04.566219   55718 ssh_runner.go:195] Run: crio --version
	I0910 18:46:04.595188   55718 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 18:46:04.596226   55718 main.go:141] libmachine: (auto-642043) Calling .GetIP
	I0910 18:46:04.598928   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:04.599234   55718 main.go:141] libmachine: (auto-642043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:63:50", ip: ""} in network mk-auto-642043: {Iface:virbr4 ExpiryTime:2024-09-10 19:45:56 +0000 UTC Type:0 Mac:52:54:00:0e:63:50 Iaid: IPaddr:192.168.72.99 Prefix:24 Hostname:auto-642043 Clientid:01:52:54:00:0e:63:50}
	I0910 18:46:04.599260   55718 main.go:141] libmachine: (auto-642043) DBG | domain auto-642043 has defined IP address 192.168.72.99 and MAC address 52:54:00:0e:63:50 in network mk-auto-642043
	I0910 18:46:04.599416   55718 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0910 18:46:04.603712   55718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:46:04.615837   55718 kubeadm.go:883] updating cluster {Name:auto-642043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0
ClusterName:auto-642043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.99 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:46:04.615940   55718 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:46:04.615993   55718 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:46:04.647488   55718 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 18:46:04.647563   55718 ssh_runner.go:195] Run: which lz4
	I0910 18:46:04.651446   55718 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 18:46:04.655594   55718 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 18:46:04.655617   55718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
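	[editor's note] The two steps above — a stat probe for /preloaded.tar.lz4 on the guest, followed by an scp of the cached preload tarball once the probe fails — follow a check-then-copy pattern. A minimal sketch of that pattern, shelling out to the system ssh/scp binaries (a hypothetical helper under assumed paths, not minikube's ssh_runner):

	```go
	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureRemoteFile copies localPath to remotePath on host unless the file is
	// already present. Illustrates the stat-then-scp pattern from the log above;
	// not minikube's actual implementation.
	func ensureRemoteFile(host, keyPath, localPath, remotePath string) error {
		// Probe: exit status 0 means the file already exists on the guest.
		probe := exec.Command("ssh", "-i", keyPath, host, "stat", remotePath)
		if err := probe.Run(); err == nil {
			return nil // already there, skip the copy
		}
		// Probe failed, so push the local file with scp.
		copyCmd := exec.Command("scp", "-i", keyPath, localPath, host+":"+remotePath)
		if out, err := copyCmd.CombinedOutput(); err != nil {
			return fmt.Errorf("scp failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Example values mirroring the log; adjust for a real environment.
		err := ensureRemoteFile(
			"docker@192.168.72.99",
			"/home/jenkins/minikube-integration/19598-5973/.minikube/machines/auto-642043/id_rsa",
			"/home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4",
			"/preloaded.tar.lz4",
		)
		if err != nil {
			fmt.Println(err)
		}
	}
	```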
	I0910 18:46:01.370995   56745 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:46:01.371028   56745 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0910 18:46:01.371035   56745 cache.go:56] Caching tarball of preloaded images
	I0910 18:46:01.371110   56745 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 18:46:01.371123   56745 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 18:46:01.371210   56745 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/config.json ...
	I0910 18:46:01.371228   56745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/config.json: {Name:mk7a5f14f480a8ed0c2a92c4904404c70af46238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:46:01.371365   56745 start.go:360] acquireMachinesLock for kindnet-642043: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
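	[editor's note] acquireMachinesLock above serializes VM creation across the concurrently running profiles: this kindnet-642043 process starts waiting at 18:46:01, and the auto-642043 process releases the lock at 18:46:03 after holding it for about 22s. A minimal sketch of a retry-until-timeout lock of the kind the log's lock spec implies (O_EXCL lock file, 500ms retry delay, 13m timeout); this is an assumption-level illustration, not minikube's actual lock implementation:

	```go
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lock file, retrying every delay until
	// timeout expires. Simplified stand-in for the machines lock in the log
	// (Delay:500ms Timeout:13m0s).
	func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out waiting for lock " + path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; create the VM here")
	}
	```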
	I0910 18:46:03.196450   56007 machine.go:93] provisionDockerMachine start ...
	I0910 18:46:03.196485   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .DriverName
	I0910 18:46:03.196675   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:46:03.199731   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:03.200194   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:46:03.200225   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:03.200367   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:46:03.200527   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:46:03.200692   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:46:03.200825   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:46:03.201008   56007 main.go:141] libmachine: Using SSH client type: native
	I0910 18:46:03.201225   56007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0910 18:46:03.201237   56007 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:46:03.302804   56007 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-192799
	
	I0910 18:46:03.302831   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetMachineName
	I0910 18:46:03.303060   56007 buildroot.go:166] provisioning hostname "kubernetes-upgrade-192799"
	I0910 18:46:03.303087   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetMachineName
	I0910 18:46:03.303258   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:46:03.306306   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:03.306748   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:46:03.306776   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:03.306904   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:46:03.307066   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:46:03.307236   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:46:03.307416   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:46:03.307621   56007 main.go:141] libmachine: Using SSH client type: native
	I0910 18:46:03.307815   56007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0910 18:46:03.307832   56007 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-192799 && echo "kubernetes-upgrade-192799" | sudo tee /etc/hostname
	I0910 18:46:03.424648   56007 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-192799
	
	I0910 18:46:03.424681   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:46:03.427289   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:03.427663   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:46:03.427704   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:03.427820   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:46:03.428004   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:46:03.428182   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:46:03.428342   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:46:03.428494   56007 main.go:141] libmachine: Using SSH client type: native
	I0910 18:46:03.428674   56007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0910 18:46:03.428690   56007 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-192799' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-192799/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-192799' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:46:03.531640   56007 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:46:03.531667   56007 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:46:03.531705   56007 buildroot.go:174] setting up certificates
	I0910 18:46:03.531721   56007 provision.go:84] configureAuth start
	I0910 18:46:03.531737   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetMachineName
	I0910 18:46:03.532008   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetIP
	I0910 18:46:03.534446   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:03.534834   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:46:03.534869   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:03.535010   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:46:03.537122   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:03.537541   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:46:03.537565   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:03.537736   56007 provision.go:143] copyHostCerts
	I0910 18:46:03.537789   56007 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:46:03.537801   56007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:46:03.537861   56007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:46:03.537986   56007 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:46:03.537997   56007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:46:03.538026   56007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:46:03.538110   56007 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:46:03.538122   56007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:46:03.538149   56007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:46:03.538248   56007 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-192799 san=[127.0.0.1 192.168.39.145 kubernetes-upgrade-192799 localhost minikube]
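The server cert above is signed by the profile's minikube CA and carries the SANs listed in the log line. Below is a minimal Go sketch of that signing step, with error handling elided and hypothetical file names (ca.pem, ca-key.pem, server.pem); it also assumes the CA key is an unencrypted PKCS#1 RSA key, which this log does not itself confirm.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA cert and key (hypothetical paths; error handling elided for brevity).
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumption: PKCS#1 RSA key

	// Fresh key pair for the machine's server certificate.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-192799"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.145")},
		DNSNames:    []string{"kubernetes-upgrade-192799", "localhost", "minikube"},
	}

	// Sign the server cert with the CA and write cert + key to disk.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &srvKey.PublicKey, caKey)
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)}), 0o600)
}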
	I0910 18:46:03.712751   56007 provision.go:177] copyRemoteCerts
	I0910 18:46:03.712809   56007 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:46:03.712829   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:46:03.715759   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:03.716190   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:46:03.716217   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:03.716415   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:46:03.716587   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:46:03.716765   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:46:03.716880   56007 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/kubernetes-upgrade-192799/id_rsa Username:docker}
	I0910 18:46:03.801440   56007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:46:03.826928   56007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0910 18:46:03.851176   56007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
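copyRemoteCerts then pushes those files onto the VM over SSH. A rough sketch of one such transfer using golang.org/x/crypto/ssh plus github.com/pkg/sftp follows; the libraries and paths here are illustrative only (this log does not confirm which transport minikube's ssh_runner uses internally), and the real target /etc/docker/ca.pem would additionally require sudo on the guest.

package main

import (
	"io"
	"log"
	"os"

	"github.com/pkg/sftp"
	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile("id_rsa") // hypothetical key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	conn, err := ssh.Dial("tcp", "192.168.39.145:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client, err := sftp.NewClient(conn)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	src, _ := os.Open("ca.pem")            // hypothetical local path
	dst, _ := client.Create("/tmp/ca.pem") // hypothetical remote staging path
	defer src.Close()
	defer dst.Close()
	if _, err := io.Copy(dst, src); err != nil {
		log.Fatal(err)
	}
}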
	I0910 18:46:03.882663   56007 provision.go:87] duration metric: took 350.927769ms to configureAuth
	I0910 18:46:03.882689   56007 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:46:03.882864   56007 config.go:182] Loaded profile config "kubernetes-upgrade-192799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:46:03.882965   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:46:03.885608   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:03.885970   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:46:03.885995   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:03.886209   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:46:03.886422   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:46:03.886629   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:46:03.886818   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:46:03.886992   56007 main.go:141] libmachine: Using SSH client type: native
	I0910 18:46:03.887198   56007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0910 18:46:03.887218   56007 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:46:05.999686   55718 crio.go:462] duration metric: took 1.348302397s to copy over tarball
	I0910 18:46:05.999759   55718 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 18:46:08.193240   55718 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.193454516s)
	I0910 18:46:08.193272   55718 crio.go:469] duration metric: took 2.193560525s to extract the tarball
	I0910 18:46:08.193280   55718 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 18:46:08.230127   55718 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:46:08.272299   55718 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:46:08.272323   55718 cache_images.go:84] Images are preloaded, skipping loading
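The preload check shells out to crictl and inspects the returned image list. A hedged sketch of such a verification is below; it assumes the usual {"images":[{"repoTags":[...]}]} shape of `crictl images --output json`, which this log does not show explicitly, and the required-image name is only an example.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors the assumed JSON shape of `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Example required image for v1.31.0; the real list is version-dependent.
	fmt.Println("kube-apiserver preloaded:", have["registry.k8s.io/kube-apiserver:v1.31.0"])
}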
	I0910 18:46:08.272332   55718 kubeadm.go:934] updating node { 192.168.72.99 8443 v1.31.0 crio true true} ...
	I0910 18:46:08.272477   55718 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-642043 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:auto-642043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:46:08.272569   55718 ssh_runner.go:195] Run: crio config
	I0910 18:46:08.322627   55718 cni.go:84] Creating CNI manager for ""
	I0910 18:46:08.322653   55718 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:46:08.322664   55718 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:46:08.322685   55718 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.99 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-642043 NodeName:auto-642043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:46:08.322807   55718 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-642043"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:46:08.322862   55718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:46:08.333145   55718 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:46:08.333210   55718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:46:08.343278   55718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0910 18:46:08.360292   55718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:46:08.378013   55718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
	I0910 18:46:08.395253   55718 ssh_runner.go:195] Run: grep 192.168.72.99	control-plane.minikube.internal$ /etc/hosts
	I0910 18:46:08.399088   55718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.99	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
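The one-liner above keeps /etc/hosts idempotent: any stale control-plane.minikube.internal entry is filtered out before the current IP is appended. The same filter-then-append logic as a small Go sketch, for illustration only:

package main

import (
	"os"
	"strings"
)

func main() {
	const hostname = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for the control-plane alias.
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	// Append the current control-plane IP (taken from the log line above).
	kept = append(kept, "192.168.72.99\t"+hostname)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}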
	I0910 18:46:08.412185   55718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:46:08.542214   55718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:46:08.559744   55718 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043 for IP: 192.168.72.99
	I0910 18:46:08.559781   55718 certs.go:194] generating shared ca certs ...
	I0910 18:46:08.559800   55718 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:46:08.559961   55718 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:46:08.560025   55718 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:46:08.560043   55718 certs.go:256] generating profile certs ...
	I0910 18:46:08.560111   55718 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.key
	I0910 18:46:08.560142   55718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt with IP's: []
	I0910 18:46:08.684991   55718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt ...
	I0910 18:46:08.685018   55718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: {Name:mkd163adf3eeb78ea70e06c03b052f9804bdfc2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:46:08.685202   55718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.key ...
	I0910 18:46:08.685216   55718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.key: {Name:mk53a548c63eb9538338e14bf871b3dada1200bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:46:08.685299   55718 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/apiserver.key.dbd048a2
	I0910 18:46:08.685313   55718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/apiserver.crt.dbd048a2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.99]
	I0910 18:46:08.772634   55718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/apiserver.crt.dbd048a2 ...
	I0910 18:46:08.772660   55718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/apiserver.crt.dbd048a2: {Name:mk17e33b9f619df0625117dc1274e3cdc29973e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:46:08.772803   55718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/apiserver.key.dbd048a2 ...
	I0910 18:46:08.772814   55718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/apiserver.key.dbd048a2: {Name:mke923f482389765bd8837c2f7dd0969448ceed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:46:08.772883   55718 certs.go:381] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/apiserver.crt.dbd048a2 -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/apiserver.crt
	I0910 18:46:08.772948   55718 certs.go:385] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/apiserver.key.dbd048a2 -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/apiserver.key
	I0910 18:46:08.772997   55718 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/proxy-client.key
	I0910 18:46:08.773017   55718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/proxy-client.crt with IP's: []
	I0910 18:46:08.828617   55718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/proxy-client.crt ...
	I0910 18:46:08.828645   55718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/proxy-client.crt: {Name:mkcd297c6882a58da526b4aba7a9911a64b510de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:46:08.828786   55718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/proxy-client.key ...
	I0910 18:46:08.828796   55718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/proxy-client.key: {Name:mkbc180f96a1c58438a9723f58ad270a479d8dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:46:08.828951   55718 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:46:08.828982   55718 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:46:08.828992   55718 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:46:08.829013   55718 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:46:08.829040   55718 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:46:08.829062   55718 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:46:08.829115   55718 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:46:08.829720   55718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:46:08.854174   55718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:46:08.876660   55718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:46:08.899555   55718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:46:08.925115   55718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0910 18:46:08.948689   55718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:46:08.972516   55718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:46:08.996298   55718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 18:46:09.020715   55718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:46:09.043830   55718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:46:09.066616   55718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:46:09.089843   55718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:46:09.106207   55718 ssh_runner.go:195] Run: openssl version
	I0910 18:46:09.111960   55718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:46:09.123507   55718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:46:09.127836   55718 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:46:09.127894   55718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:46:09.133573   55718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:46:09.144726   55718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:46:09.157440   55718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:46:09.161991   55718 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:46:09.162042   55718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:46:09.167423   55718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:46:09.178846   55718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:46:09.190177   55718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:46:09.195195   55718 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:46:09.195239   55718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:46:09.200867   55718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
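Each cert installed under /usr/share/ca-certificates also gets a hash-named symlink in /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem above) so OpenSSL can locate it by subject hash. A sketch of that step, reusing the openssl CLI for the hash exactly as the commands above do:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash creates /etc/ssl/certs/<subject-hash>.0 -> certPath, mirroring the
// `openssl x509 -hash -noout` plus `ln -fs` pair in the log above.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // mimic ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}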
	I0910 18:46:09.212994   55718 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:46:09.217348   55718 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 18:46:09.217428   55718 kubeadm.go:392] StartCluster: {Name:auto-642043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-642043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.99 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:46:09.217528   55718 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:46:09.217573   55718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:46:09.254135   55718 cri.go:89] found id: ""
	I0910 18:46:09.254227   55718 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:46:09.268452   55718 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:46:09.281368   55718 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:46:09.291327   55718 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:46:09.291346   55718 kubeadm.go:157] found existing configuration files:
	
	I0910 18:46:09.291402   55718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:46:09.300180   55718 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:46:09.300266   55718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:46:09.309914   55718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:46:09.318596   55718 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:46:09.318657   55718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:46:09.327861   55718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:46:09.338257   55718 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:46:09.338302   55718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:46:09.348174   55718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:46:09.357913   55718 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:46:09.357976   55718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:46:09.369823   55718 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 18:46:09.437160   55718 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 18:46:09.437254   55718 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 18:46:09.550647   55718 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 18:46:09.550780   55718 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 18:46:09.550935   55718 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 18:46:09.562281   55718 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 18:46:09.669368   55718 out.go:235]   - Generating certificates and keys ...
	I0910 18:46:09.669500   55718 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 18:46:09.669611   55718 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 18:46:09.702900   55718 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 18:46:09.986635   55718 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0910 18:46:10.349889   56745 start.go:364] duration metric: took 8.978504863s to acquireMachinesLock for "kindnet-642043"
	I0910 18:46:10.349961   56745 start.go:93] Provisioning new machine with config: &{Name:kindnet-642043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-642043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 18:46:10.350060   56745 start.go:125] createHost starting for "" (driver="kvm2")
	I0910 18:46:10.425689   56745 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 18:46:10.425938   56745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:46:10.425995   56745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:46:10.441443   56745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43961
	I0910 18:46:10.441877   56745 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:46:10.442491   56745 main.go:141] libmachine: Using API Version  1
	I0910 18:46:10.442525   56745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:46:10.442958   56745 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:46:10.443182   56745 main.go:141] libmachine: (kindnet-642043) Calling .GetMachineName
	I0910 18:46:10.443360   56745 main.go:141] libmachine: (kindnet-642043) Calling .DriverName
	I0910 18:46:10.443555   56745 start.go:159] libmachine.API.Create for "kindnet-642043" (driver="kvm2")
	I0910 18:46:10.443578   56745 client.go:168] LocalClient.Create starting
	I0910 18:46:10.443606   56745 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem
	I0910 18:46:10.443638   56745 main.go:141] libmachine: Decoding PEM data...
	I0910 18:46:10.443655   56745 main.go:141] libmachine: Parsing certificate...
	I0910 18:46:10.443703   56745 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem
	I0910 18:46:10.443721   56745 main.go:141] libmachine: Decoding PEM data...
	I0910 18:46:10.443734   56745 main.go:141] libmachine: Parsing certificate...
	I0910 18:46:10.443749   56745 main.go:141] libmachine: Running pre-create checks...
	I0910 18:46:10.443764   56745 main.go:141] libmachine: (kindnet-642043) Calling .PreCreateCheck
	I0910 18:46:10.444161   56745 main.go:141] libmachine: (kindnet-642043) Calling .GetConfigRaw
	I0910 18:46:10.444621   56745 main.go:141] libmachine: Creating machine...
	I0910 18:46:10.444638   56745 main.go:141] libmachine: (kindnet-642043) Calling .Create
	I0910 18:46:10.444773   56745 main.go:141] libmachine: (kindnet-642043) Creating KVM machine...
	I0910 18:46:10.446036   56745 main.go:141] libmachine: (kindnet-642043) DBG | found existing default KVM network
	I0910 18:46:10.447079   56745 main.go:141] libmachine: (kindnet-642043) DBG | I0910 18:46:10.446943   56848 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e4:ea:60} reservation:<nil>}
	I0910 18:46:10.448491   56745 main.go:141] libmachine: (kindnet-642043) DBG | I0910 18:46:10.448395   56848 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a4750}
	I0910 18:46:10.448521   56745 main.go:141] libmachine: (kindnet-642043) DBG | created network xml: 
	I0910 18:46:10.448536   56745 main.go:141] libmachine: (kindnet-642043) DBG | <network>
	I0910 18:46:10.448555   56745 main.go:141] libmachine: (kindnet-642043) DBG |   <name>mk-kindnet-642043</name>
	I0910 18:46:10.448567   56745 main.go:141] libmachine: (kindnet-642043) DBG |   <dns enable='no'/>
	I0910 18:46:10.448579   56745 main.go:141] libmachine: (kindnet-642043) DBG |   
	I0910 18:46:10.448596   56745 main.go:141] libmachine: (kindnet-642043) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0910 18:46:10.448609   56745 main.go:141] libmachine: (kindnet-642043) DBG |     <dhcp>
	I0910 18:46:10.448621   56745 main.go:141] libmachine: (kindnet-642043) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0910 18:46:10.448634   56745 main.go:141] libmachine: (kindnet-642043) DBG |     </dhcp>
	I0910 18:46:10.448643   56745 main.go:141] libmachine: (kindnet-642043) DBG |   </ip>
	I0910 18:46:10.448652   56745 main.go:141] libmachine: (kindnet-642043) DBG |   
	I0910 18:46:10.448669   56745 main.go:141] libmachine: (kindnet-642043) DBG | </network>
	I0910 18:46:10.448682   56745 main.go:141] libmachine: (kindnet-642043) DBG | 
	I0910 18:46:10.571419   56745 main.go:141] libmachine: (kindnet-642043) DBG | trying to create private KVM network mk-kindnet-642043 192.168.50.0/24...
	I0910 18:46:10.648889   56745 main.go:141] libmachine: (kindnet-642043) DBG | private KVM network mk-kindnet-642043 192.168.50.0/24 created
	I0910 18:46:10.648920   56745 main.go:141] libmachine: (kindnet-642043) Setting up store path in /home/jenkins/minikube-integration/19598-5973/.minikube/machines/kindnet-642043 ...
	I0910 18:46:10.648947   56745 main.go:141] libmachine: (kindnet-642043) DBG | I0910 18:46:10.648876   56848 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:46:10.648967   56745 main.go:141] libmachine: (kindnet-642043) Building disk image from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 18:46:10.649051   56745 main.go:141] libmachine: (kindnet-642043) Downloading /home/jenkins/minikube-integration/19598-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 18:46:10.917863   56745 main.go:141] libmachine: (kindnet-642043) DBG | I0910 18:46:10.917732   56848 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/kindnet-642043/id_rsa...
	I0910 18:46:11.129152   56745 main.go:141] libmachine: (kindnet-642043) DBG | I0910 18:46:11.129000   56848 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/kindnet-642043/kindnet-642043.rawdisk...
	I0910 18:46:11.129185   56745 main.go:141] libmachine: (kindnet-642043) DBG | Writing magic tar header
	I0910 18:46:11.129200   56745 main.go:141] libmachine: (kindnet-642043) DBG | Writing SSH key tar header
	I0910 18:46:11.129212   56745 main.go:141] libmachine: (kindnet-642043) DBG | I0910 18:46:11.129168   56848 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/kindnet-642043 ...
	I0910 18:46:11.129331   56745 main.go:141] libmachine: (kindnet-642043) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/kindnet-642043
	I0910 18:46:11.129354   56745 main.go:141] libmachine: (kindnet-642043) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines
	I0910 18:46:11.129368   56745 main.go:141] libmachine: (kindnet-642043) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/kindnet-642043 (perms=drwx------)
	I0910 18:46:11.129383   56745 main.go:141] libmachine: (kindnet-642043) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:46:11.129394   56745 main.go:141] libmachine: (kindnet-642043) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines (perms=drwxr-xr-x)
	I0910 18:46:11.129423   56745 main.go:141] libmachine: (kindnet-642043) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube (perms=drwxr-xr-x)
	I0910 18:46:11.129447   56745 main.go:141] libmachine: (kindnet-642043) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973 (perms=drwxrwxr-x)
	I0910 18:46:11.129471   56745 main.go:141] libmachine: (kindnet-642043) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973
	I0910 18:46:11.129493   56745 main.go:141] libmachine: (kindnet-642043) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0910 18:46:11.129506   56745 main.go:141] libmachine: (kindnet-642043) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0910 18:46:11.129519   56745 main.go:141] libmachine: (kindnet-642043) DBG | Checking permissions on dir: /home/jenkins
	I0910 18:46:11.129530   56745 main.go:141] libmachine: (kindnet-642043) DBG | Checking permissions on dir: /home
	I0910 18:46:11.129543   56745 main.go:141] libmachine: (kindnet-642043) DBG | Skipping /home - not owner
	I0910 18:46:11.129559   56745 main.go:141] libmachine: (kindnet-642043) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0910 18:46:11.129572   56745 main.go:141] libmachine: (kindnet-642043) Creating domain...
	I0910 18:46:11.130499   56745 main.go:141] libmachine: (kindnet-642043) define libvirt domain using xml: 
	I0910 18:46:11.130519   56745 main.go:141] libmachine: (kindnet-642043) <domain type='kvm'>
	I0910 18:46:11.130528   56745 main.go:141] libmachine: (kindnet-642043)   <name>kindnet-642043</name>
	I0910 18:46:11.130540   56745 main.go:141] libmachine: (kindnet-642043)   <memory unit='MiB'>3072</memory>
	I0910 18:46:11.130549   56745 main.go:141] libmachine: (kindnet-642043)   <vcpu>2</vcpu>
	I0910 18:46:11.130559   56745 main.go:141] libmachine: (kindnet-642043)   <features>
	I0910 18:46:11.130579   56745 main.go:141] libmachine: (kindnet-642043)     <acpi/>
	I0910 18:46:11.130590   56745 main.go:141] libmachine: (kindnet-642043)     <apic/>
	I0910 18:46:11.130606   56745 main.go:141] libmachine: (kindnet-642043)     <pae/>
	I0910 18:46:11.130620   56745 main.go:141] libmachine: (kindnet-642043)     
	I0910 18:46:11.130631   56745 main.go:141] libmachine: (kindnet-642043)   </features>
	I0910 18:46:11.130655   56745 main.go:141] libmachine: (kindnet-642043)   <cpu mode='host-passthrough'>
	I0910 18:46:11.130666   56745 main.go:141] libmachine: (kindnet-642043)   
	I0910 18:46:11.130671   56745 main.go:141] libmachine: (kindnet-642043)   </cpu>
	I0910 18:46:11.130676   56745 main.go:141] libmachine: (kindnet-642043)   <os>
	I0910 18:46:11.130681   56745 main.go:141] libmachine: (kindnet-642043)     <type>hvm</type>
	I0910 18:46:11.130685   56745 main.go:141] libmachine: (kindnet-642043)     <boot dev='cdrom'/>
	I0910 18:46:11.130689   56745 main.go:141] libmachine: (kindnet-642043)     <boot dev='hd'/>
	I0910 18:46:11.130695   56745 main.go:141] libmachine: (kindnet-642043)     <bootmenu enable='no'/>
	I0910 18:46:11.130698   56745 main.go:141] libmachine: (kindnet-642043)   </os>
	I0910 18:46:11.130703   56745 main.go:141] libmachine: (kindnet-642043)   <devices>
	I0910 18:46:11.130717   56745 main.go:141] libmachine: (kindnet-642043)     <disk type='file' device='cdrom'>
	I0910 18:46:11.130731   56745 main.go:141] libmachine: (kindnet-642043)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/kindnet-642043/boot2docker.iso'/>
	I0910 18:46:11.130739   56745 main.go:141] libmachine: (kindnet-642043)       <target dev='hdc' bus='scsi'/>
	I0910 18:46:11.130747   56745 main.go:141] libmachine: (kindnet-642043)       <readonly/>
	I0910 18:46:11.130754   56745 main.go:141] libmachine: (kindnet-642043)     </disk>
	I0910 18:46:11.130763   56745 main.go:141] libmachine: (kindnet-642043)     <disk type='file' device='disk'>
	I0910 18:46:11.130773   56745 main.go:141] libmachine: (kindnet-642043)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0910 18:46:11.130785   56745 main.go:141] libmachine: (kindnet-642043)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/kindnet-642043/kindnet-642043.rawdisk'/>
	I0910 18:46:11.130793   56745 main.go:141] libmachine: (kindnet-642043)       <target dev='hda' bus='virtio'/>
	I0910 18:46:11.130801   56745 main.go:141] libmachine: (kindnet-642043)     </disk>
	I0910 18:46:11.130812   56745 main.go:141] libmachine: (kindnet-642043)     <interface type='network'>
	I0910 18:46:11.130822   56745 main.go:141] libmachine: (kindnet-642043)       <source network='mk-kindnet-642043'/>
	I0910 18:46:11.130832   56745 main.go:141] libmachine: (kindnet-642043)       <model type='virtio'/>
	I0910 18:46:11.130844   56745 main.go:141] libmachine: (kindnet-642043)     </interface>
	I0910 18:46:11.130854   56745 main.go:141] libmachine: (kindnet-642043)     <interface type='network'>
	I0910 18:46:11.130863   56745 main.go:141] libmachine: (kindnet-642043)       <source network='default'/>
	I0910 18:46:11.130873   56745 main.go:141] libmachine: (kindnet-642043)       <model type='virtio'/>
	I0910 18:46:11.130881   56745 main.go:141] libmachine: (kindnet-642043)     </interface>
	I0910 18:46:11.130886   56745 main.go:141] libmachine: (kindnet-642043)     <serial type='pty'>
	I0910 18:46:11.130896   56745 main.go:141] libmachine: (kindnet-642043)       <target port='0'/>
	I0910 18:46:11.130907   56745 main.go:141] libmachine: (kindnet-642043)     </serial>
	I0910 18:46:11.130917   56745 main.go:141] libmachine: (kindnet-642043)     <console type='pty'>
	I0910 18:46:11.130928   56745 main.go:141] libmachine: (kindnet-642043)       <target type='serial' port='0'/>
	I0910 18:46:11.130939   56745 main.go:141] libmachine: (kindnet-642043)     </console>
	I0910 18:46:11.130949   56745 main.go:141] libmachine: (kindnet-642043)     <rng model='virtio'>
	I0910 18:46:11.130961   56745 main.go:141] libmachine: (kindnet-642043)       <backend model='random'>/dev/random</backend>
	I0910 18:46:11.130965   56745 main.go:141] libmachine: (kindnet-642043)     </rng>
	I0910 18:46:11.130970   56745 main.go:141] libmachine: (kindnet-642043)     
	I0910 18:46:11.130979   56745 main.go:141] libmachine: (kindnet-642043)     
	I0910 18:46:11.130988   56745 main.go:141] libmachine: (kindnet-642043)   </devices>
	I0910 18:46:11.130998   56745 main.go:141] libmachine: (kindnet-642043) </domain>
	I0910 18:46:11.131008   56745 main.go:141] libmachine: (kindnet-642043) 
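The XML dumped above is handed to libvirt to define and boot the VM. A minimal sketch with the libvirt Go bindings (libvirt.org/go/libvirt) follows; it assumes the <domain> document is available as a string, and minikube's own kvm2 driver plugin does considerably more (network creation, disk permissions, IP waiting) than this.

package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Placeholder: in practice this would be the full <domain> document dumped in the log above.
	domainXML := "<domain type='kvm'><name>kindnet-642043</name></domain>"

	// Define the persistent domain, then start it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain started")
}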
	I0910 18:46:11.139722   56745 main.go:141] libmachine: (kindnet-642043) DBG | domain kindnet-642043 has defined MAC address 52:54:00:51:fa:b8 in network default
	I0910 18:46:11.140568   56745 main.go:141] libmachine: (kindnet-642043) Ensuring networks are active...
	I0910 18:46:11.140588   56745 main.go:141] libmachine: (kindnet-642043) DBG | domain kindnet-642043 has defined MAC address 52:54:00:3c:f4:98 in network mk-kindnet-642043
	I0910 18:46:11.141614   56745 main.go:141] libmachine: (kindnet-642043) Ensuring network default is active
	I0910 18:46:11.141974   56745 main.go:141] libmachine: (kindnet-642043) Ensuring network mk-kindnet-642043 is active
	I0910 18:46:11.142626   56745 main.go:141] libmachine: (kindnet-642043) Getting domain xml...
	I0910 18:46:11.143525   56745 main.go:141] libmachine: (kindnet-642043) Creating domain...
	I0910 18:46:10.120132   56007 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:46:10.120158   56007 machine.go:96] duration metric: took 6.923689772s to provisionDockerMachine
	I0910 18:46:10.120190   56007 start.go:293] postStartSetup for "kubernetes-upgrade-192799" (driver="kvm2")
	I0910 18:46:10.120206   56007 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:46:10.120232   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .DriverName
	I0910 18:46:10.120617   56007 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:46:10.120648   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:46:10.123320   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:10.123711   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:46:10.123736   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:10.123919   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:46:10.124136   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:46:10.124309   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:46:10.124440   56007 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/kubernetes-upgrade-192799/id_rsa Username:docker}
	I0910 18:46:10.209062   56007 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:46:10.213457   56007 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:46:10.213484   56007 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:46:10.213553   56007 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:46:10.213660   56007 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:46:10.213784   56007 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:46:10.224003   56007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:46:10.247004   56007 start.go:296] duration metric: took 126.77972ms for postStartSetup
	I0910 18:46:10.247042   56007 fix.go:56] duration metric: took 7.076731836s for fixHost
	I0910 18:46:10.247066   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:46:10.249723   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:10.250066   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:46:10.250098   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:10.250206   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:46:10.250381   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:46:10.250534   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:46:10.250697   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:46:10.250830   56007 main.go:141] libmachine: Using SSH client type: native
	I0910 18:46:10.250990   56007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0910 18:46:10.251000   56007 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:46:10.349764   56007 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725993970.337245444
	
	I0910 18:46:10.349786   56007 fix.go:216] guest clock: 1725993970.337245444
	I0910 18:46:10.349796   56007 fix.go:229] Guest: 2024-09-10 18:46:10.337245444 +0000 UTC Remote: 2024-09-10 18:46:10.247046565 +0000 UTC m=+62.460536061 (delta=90.198879ms)
	I0910 18:46:10.349821   56007 fix.go:200] guest clock delta is within tolerance: 90.198879ms
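fix.go compares the guest's `date +%s.%N` output against the host clock and only intervenes when the skew exceeds a tolerance. A tiny sketch of that comparison; the tolerance value and float parsing (which loses sub-microsecond precision) are illustrative, not taken from this log.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the host-guest skew.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return host.Sub(guest), nil
}

func main() {
	const tolerance = 2 * time.Second // illustrative threshold
	d, err := clockDelta("1725993970.337245444", time.Now())
	if err != nil {
		panic(err)
	}
	if math.Abs(float64(d)) > float64(tolerance) {
		fmt.Println("guest clock needs adjustment, delta:", d)
	} else {
		fmt.Println("guest clock delta is within tolerance:", d)
	}
}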
	I0910 18:46:10.349830   56007 start.go:83] releasing machines lock for "kubernetes-upgrade-192799", held for 7.179546799s
	I0910 18:46:10.349857   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .DriverName
	I0910 18:46:10.350123   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetIP
	I0910 18:46:10.352676   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:10.352991   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:46:10.353021   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:10.353237   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .DriverName
	I0910 18:46:10.353736   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .DriverName
	I0910 18:46:10.353905   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .DriverName
	I0910 18:46:10.354011   56007 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:46:10.354057   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:46:10.354121   56007 ssh_runner.go:195] Run: cat /version.json
	I0910 18:46:10.354146   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHHostname
	I0910 18:46:10.356762   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:10.356972   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:10.357175   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:46:10.357197   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:10.357371   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:46:10.357401   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:10.357431   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:46:10.357536   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHPort
	I0910 18:46:10.357631   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:46:10.357704   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHKeyPath
	I0910 18:46:10.357776   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:46:10.357857   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetSSHUsername
	I0910 18:46:10.358054   56007 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/kubernetes-upgrade-192799/id_rsa Username:docker}
	I0910 18:46:10.358057   56007 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/kubernetes-upgrade-192799/id_rsa Username:docker}
	I0910 18:46:10.457423   56007 ssh_runner.go:195] Run: systemctl --version
	I0910 18:46:10.464872   56007 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:46:10.623512   56007 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:46:10.630026   56007 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:46:10.630081   56007 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:46:10.641935   56007 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0910 18:46:10.641966   56007 start.go:495] detecting cgroup driver to use...
	I0910 18:46:10.642027   56007 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:46:10.666798   56007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:46:10.682429   56007 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:46:10.682494   56007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:46:10.700621   56007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:46:10.717264   56007 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:46:10.902047   56007 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:46:11.053905   56007 docker.go:233] disabling docker service ...
	I0910 18:46:11.053971   56007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:46:11.075074   56007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:46:11.093184   56007 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:46:11.247919   56007 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:46:11.404442   56007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:46:11.419383   56007 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:46:11.441620   56007 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:46:11.441680   56007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:46:11.452368   56007 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:46:11.452427   56007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:46:11.463093   56007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:46:11.473211   56007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:46:11.483370   56007 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:46:11.494476   56007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:46:11.504630   56007 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:46:11.517836   56007 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:46:11.527729   56007 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:46:11.540611   56007 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:46:11.552614   56007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:46:11.727420   56007 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:46:12.121551   56007 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:46:12.121620   56007 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:46:12.126867   56007 start.go:563] Will wait 60s for crictl version
	I0910 18:46:12.126923   56007 ssh_runner.go:195] Run: which crictl
	I0910 18:46:12.130947   56007 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:46:12.170704   56007 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:46:12.170790   56007 ssh_runner.go:195] Run: crio --version
	I0910 18:46:12.204017   56007 ssh_runner.go:195] Run: crio --version
	I0910 18:46:12.238036   56007 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 18:46:10.268624   55718 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0910 18:46:10.438955   55718 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0910 18:46:10.655888   55718 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0910 18:46:10.656072   55718 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-642043 localhost] and IPs [192.168.72.99 127.0.0.1 ::1]
	I0910 18:46:11.034541   55718 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0910 18:46:11.034928   55718 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-642043 localhost] and IPs [192.168.72.99 127.0.0.1 ::1]
	I0910 18:46:11.154309   55718 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 18:46:11.305028   55718 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 18:46:11.566792   55718 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0910 18:46:11.566862   55718 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 18:46:11.930677   55718 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 18:46:12.039634   55718 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 18:46:12.269896   55718 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 18:46:12.617269   55718 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 18:46:12.711554   55718 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 18:46:12.712225   55718 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 18:46:12.718245   55718 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 18:46:12.239117   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) Calling .GetIP
	I0910 18:46:12.242189   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:12.242707   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:d1:04", ip: ""} in network mk-kubernetes-upgrade-192799: {Iface:virbr1 ExpiryTime:2024-09-10 19:39:22 +0000 UTC Type:0 Mac:52:54:00:2a:d1:04 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:kubernetes-upgrade-192799 Clientid:01:52:54:00:2a:d1:04}
	I0910 18:46:12.242758   56007 main.go:141] libmachine: (kubernetes-upgrade-192799) DBG | domain kubernetes-upgrade-192799 has defined IP address 192.168.39.145 and MAC address 52:54:00:2a:d1:04 in network mk-kubernetes-upgrade-192799
	I0910 18:46:12.243005   56007 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 18:46:12.247613   56007 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-192799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:kubernetes-upgrade-192799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:46:12.247750   56007 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:46:12.247809   56007 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:46:12.299424   56007 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:46:12.299446   56007 crio.go:433] Images already preloaded, skipping extraction
	I0910 18:46:12.299490   56007 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:46:12.341437   56007 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:46:12.341479   56007 cache_images.go:84] Images are preloaded, skipping loading
	I0910 18:46:12.341490   56007 kubeadm.go:934] updating node { 192.168.39.145 8443 v1.31.0 crio true true} ...
	I0910 18:46:12.341652   56007 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-192799 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-192799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:46:12.341761   56007 ssh_runner.go:195] Run: crio config
	I0910 18:46:12.398595   56007 cni.go:84] Creating CNI manager for ""
	I0910 18:46:12.398626   56007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:46:12.398643   56007 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:46:12.398671   56007 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-192799 NodeName:kubernetes-upgrade-192799 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:46:12.398856   56007 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-192799"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:46:12.398924   56007 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:46:12.411182   56007 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:46:12.411269   56007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:46:12.421615   56007 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0910 18:46:12.443166   56007 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:46:12.464322   56007 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0910 18:46:12.485500   56007 ssh_runner.go:195] Run: grep 192.168.39.145	control-plane.minikube.internal$ /etc/hosts
	I0910 18:46:12.490578   56007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:46:12.643911   56007 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:46:12.661658   56007 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799 for IP: 192.168.39.145
	I0910 18:46:12.661710   56007 certs.go:194] generating shared ca certs ...
	I0910 18:46:12.661731   56007 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:46:12.661915   56007 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:46:12.661979   56007 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:46:12.661993   56007 certs.go:256] generating profile certs ...
	I0910 18:46:12.662097   56007 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/client.key
	I0910 18:46:12.662159   56007 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/apiserver.key.4e17f7c7
	I0910 18:46:12.662211   56007 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/proxy-client.key
	I0910 18:46:12.662358   56007 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:46:12.662401   56007 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:46:12.662416   56007 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:46:12.662456   56007 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:46:12.662488   56007 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:46:12.662526   56007 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:46:12.662584   56007 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:46:12.663425   56007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:46:12.697518   56007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:46:12.723746   56007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:46:12.755672   56007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:46:12.787895   56007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0910 18:46:12.726009   55718 out.go:235]   - Booting up control plane ...
	I0910 18:46:12.726154   55718 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 18:46:12.726267   55718 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 18:46:12.726364   55718 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 18:46:12.745197   55718 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 18:46:12.756658   55718 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 18:46:12.756718   55718 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 18:46:12.903807   55718 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 18:46:12.903964   55718 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 18:46:13.407732   55718 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.773108ms
	I0910 18:46:13.407836   55718 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 18:46:12.460058   56745 main.go:141] libmachine: (kindnet-642043) Waiting to get IP...
	I0910 18:46:12.460739   56745 main.go:141] libmachine: (kindnet-642043) DBG | domain kindnet-642043 has defined MAC address 52:54:00:3c:f4:98 in network mk-kindnet-642043
	I0910 18:46:12.461144   56745 main.go:141] libmachine: (kindnet-642043) DBG | unable to find current IP address of domain kindnet-642043 in network mk-kindnet-642043
	I0910 18:46:12.461170   56745 main.go:141] libmachine: (kindnet-642043) DBG | I0910 18:46:12.461130   56848 retry.go:31] will retry after 273.648323ms: waiting for machine to come up
	I0910 18:46:12.736761   56745 main.go:141] libmachine: (kindnet-642043) DBG | domain kindnet-642043 has defined MAC address 52:54:00:3c:f4:98 in network mk-kindnet-642043
	I0910 18:46:12.737423   56745 main.go:141] libmachine: (kindnet-642043) DBG | unable to find current IP address of domain kindnet-642043 in network mk-kindnet-642043
	I0910 18:46:12.737446   56745 main.go:141] libmachine: (kindnet-642043) DBG | I0910 18:46:12.737395   56848 retry.go:31] will retry after 357.167079ms: waiting for machine to come up
	I0910 18:46:13.096029   56745 main.go:141] libmachine: (kindnet-642043) DBG | domain kindnet-642043 has defined MAC address 52:54:00:3c:f4:98 in network mk-kindnet-642043
	I0910 18:46:13.096669   56745 main.go:141] libmachine: (kindnet-642043) DBG | unable to find current IP address of domain kindnet-642043 in network mk-kindnet-642043
	I0910 18:46:13.096693   56745 main.go:141] libmachine: (kindnet-642043) DBG | I0910 18:46:13.096619   56848 retry.go:31] will retry after 363.99517ms: waiting for machine to come up
	I0910 18:46:13.461975   56745 main.go:141] libmachine: (kindnet-642043) DBG | domain kindnet-642043 has defined MAC address 52:54:00:3c:f4:98 in network mk-kindnet-642043
	I0910 18:46:13.462500   56745 main.go:141] libmachine: (kindnet-642043) DBG | unable to find current IP address of domain kindnet-642043 in network mk-kindnet-642043
	I0910 18:46:13.462527   56745 main.go:141] libmachine: (kindnet-642043) DBG | I0910 18:46:13.462452   56848 retry.go:31] will retry after 493.923419ms: waiting for machine to come up
	I0910 18:46:13.958245   56745 main.go:141] libmachine: (kindnet-642043) DBG | domain kindnet-642043 has defined MAC address 52:54:00:3c:f4:98 in network mk-kindnet-642043
	I0910 18:46:13.958730   56745 main.go:141] libmachine: (kindnet-642043) DBG | unable to find current IP address of domain kindnet-642043 in network mk-kindnet-642043
	I0910 18:46:13.958757   56745 main.go:141] libmachine: (kindnet-642043) DBG | I0910 18:46:13.958679   56848 retry.go:31] will retry after 674.27564ms: waiting for machine to come up
	I0910 18:46:14.634368   56745 main.go:141] libmachine: (kindnet-642043) DBG | domain kindnet-642043 has defined MAC address 52:54:00:3c:f4:98 in network mk-kindnet-642043
	I0910 18:46:14.634984   56745 main.go:141] libmachine: (kindnet-642043) DBG | unable to find current IP address of domain kindnet-642043 in network mk-kindnet-642043
	I0910 18:46:14.635022   56745 main.go:141] libmachine: (kindnet-642043) DBG | I0910 18:46:14.634960   56848 retry.go:31] will retry after 939.294712ms: waiting for machine to come up
	I0910 18:46:15.576433   56745 main.go:141] libmachine: (kindnet-642043) DBG | domain kindnet-642043 has defined MAC address 52:54:00:3c:f4:98 in network mk-kindnet-642043
	I0910 18:46:15.576918   56745 main.go:141] libmachine: (kindnet-642043) DBG | unable to find current IP address of domain kindnet-642043 in network mk-kindnet-642043
	I0910 18:46:15.576945   56745 main.go:141] libmachine: (kindnet-642043) DBG | I0910 18:46:15.576884   56848 retry.go:31] will retry after 955.415718ms: waiting for machine to come up
	I0910 18:46:12.820092   56007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 18:46:12.846474   56007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:46:12.874436   56007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kubernetes-upgrade-192799/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:46:12.944919   56007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:46:13.042072   56007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:46:13.103635   56007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:46:13.168514   56007 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:46:13.204207   56007 ssh_runner.go:195] Run: openssl version
	I0910 18:46:13.212190   56007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:46:13.235269   56007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:46:13.241045   56007 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:46:13.241121   56007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:46:13.277892   56007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:46:13.325592   56007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:46:13.495848   56007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:46:13.562310   56007 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:46:13.562386   56007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:46:13.606936   56007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:46:13.645121   56007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:46:13.791020   56007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:46:13.855205   56007 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:46:13.855274   56007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:46:13.873714   56007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:46:13.939306   56007 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:46:13.962515   56007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:46:14.046248   56007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:46:14.170134   56007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:46:14.227159   56007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:46:14.244170   56007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:46:14.295739   56007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 18:46:14.346502   56007 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-192799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0 ClusterName:kubernetes-upgrade-192799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:46:14.346632   56007 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:46:14.346695   56007 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:46:14.602375   56007 cri.go:89] found id: "3cf3269b7080ecb8a2dd56351f96b8d435cb1f7820635c5178a8b49324d0a826"
	I0910 18:46:14.602406   56007 cri.go:89] found id: "bdee655098222297e82d91aa6cb33152325e144dd5ab6ba9db1ab14052a91bf2"
	I0910 18:46:14.602414   56007 cri.go:89] found id: "3961cf85dbb735a4899f9a67c916992c41a359d2aa09979ad6ed87d6aa6a29b3"
	I0910 18:46:14.602421   56007 cri.go:89] found id: "095d4d5c5a9716c082af526256cae6715b2efbeb13d9e1690ebaf771a53cf711"
	I0910 18:46:14.602426   56007 cri.go:89] found id: "119ffd076025ce6945c5cd3cbad2f0ae89f673daf7b141c31adec41da77a52c1"
	I0910 18:46:14.602432   56007 cri.go:89] found id: "d8426e19ac77455d95e36174f7592ac4d2405b87485e7e148c3d2b28e3940914"
	I0910 18:46:14.602438   56007 cri.go:89] found id: "4c46778b7778913472d66927a027d2a97b84b97f084adf5b73a4e6e12fc357c4"
	I0910 18:46:14.602443   56007 cri.go:89] found id: "983e9a9d3564673d60be9b167b3ea924686bc8b9ef3c0ddaed620f8ea76e1f24"
	I0910 18:46:14.602450   56007 cri.go:89] found id: "c33c9c586c6a5937d2578c10be537cb7fa1d36fb29df2727c94c2ef4e2b26bb7"
	I0910 18:46:14.602460   56007 cri.go:89] found id: "8d37b20df5a7d2dd11d9b53ca650167b413e58829b2f143a7863d75195e37545"
	I0910 18:46:14.602466   56007 cri.go:89] found id: "f8901928acea8d2505c7565cf3395a0e6bb1a478d5700f5744cc1c7c6a8c9538"
	I0910 18:46:14.602472   56007 cri.go:89] found id: "50d2e81329edcc278b6e992d704cffffb3cacfc2140356cf724752680f84be45"
	I0910 18:46:14.602495   56007 cri.go:89] found id: "fc666a420635f5ff89a4c339053701789fc27731a41432517d8879d8cfb39a3b"
	I0910 18:46:14.602501   56007 cri.go:89] found id: "9e356d69cc1eab9e95595392fbc1143ddd2ed428638261eecd1edefef29af859"
	I0910 18:46:14.602508   56007 cri.go:89] found id: ""
	I0910 18:46:14.602562   56007 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.506779054Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=002375d6-0039-4875-9ab8-7ab09dc1dba3 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.509315311Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5c97bd79-52f3-4c6e-8861-9009ebf9208a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.509987083Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725993997509956574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c97bd79-52f3-4c6e-8861-9009ebf9208a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.510866621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=211e550f-f4c4-430e-94c3-b54f8ac1baa6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.511290503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=211e550f-f4c4-430e-94c3-b54f8ac1baa6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.512810737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe57e9b94054144de4c7735978012e930d3ff08727e74faf1937c449cf3a9788,PodSandboxId:e53f1c0c45688b6347a50c810d8bd3e2200aefb9a866cdde6ca2fdc7191f8243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725993993857364277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jmk4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e68bd70-3dc2-4c8f-818d-0859d202738b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edbefa9b70e3357ebd1e986566c217bbe2c41a69bbf6a1d4e4ed957d493cc98,PodSandboxId:3f4c5bf70475b3ca3f56b5afb4859959b4e7a846419d6124209754bda9d8a773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725993993864428805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7c36de5-52a2-44e9-9159-a6a3963562d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77af7cc5356b9d835b428d54cd36c5e430fbb1766645f589bcb0c8baa0d2fecc,PodSandboxId:7f6e644265c579f1caa0dd98f2013f3df52336544cb063060ae2ce77b1c5041b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725993990080007355,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbacd3d6eed48eece5d493fdfcf5c160,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082fc08955c141da1b4fb10abbfd8176e0ee2aedc01e90e061ce744614f48233,PodSandboxId:04a632a1e00e38da73c704734abf60ce3d3f7575f74afa528ef91382d2dabb9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725993990048209953,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa45888fac5573c47ae79f638a951a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe68e32301796feff0fdc4d3b92c9f7ed718992f3565edce400b7e89c8bf3e06,PodSandboxId:9c179cee5ea162cd85722228ebd0de3d64450d2c572045391c717e1d64c1ea32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725993990055952407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37818751bb2fa4540a2e8b87355d2631,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9ba869e0cab811ef95447ef937a384e3edfe9d530e24de9d396bd44b223391,PodSandboxId:5485db014bfa9d4a7436dda6ba740d4b3ab463120297795771d4082317f6949c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725993990033330259,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d980d98d4bd0fbfecb4cfc3d031d86,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10cf32c4d610fdb1b72852c31408218c59638672664ef5916097b331ce3756b8,PodSandboxId:be4db8336dbc32962ef9d7473722bc0d9eec15565c375772c70bd368c1c2191a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725993975010283012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-g2hp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7afe4c51-299a-42f2-aac4-9875ff619516,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8146b62c81eb1e4eab1e89ca915cd346b48876633ef7f8eae459d3b886de3b43,PodSandboxId:6f03b3db91bc611453499c8bf72588cf12cc7d4f8c28041f09af6b296fb79a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725993974880411652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5xscn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23617cb-0321-4820-bb7e-395394d09cdb,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf3269b7080ecb8a2dd56351f96b8d435cb1f7820635c5178a8b49324d0a826,PodSandboxId:3f4c5bf70475b3ca3f56b5afb4859959b4e7a846419d6124209754bda9d8a773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725993973911460466,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7c36de5-52a2-44e9-9159-a6a3963562d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdee655098222297e82d91aa6cb33152325e144dd5ab6ba9db1ab14052a91bf2,PodSandboxId:e53f1c0c45688b6347a50c810d8bd3e2200aefb9a866cdde6ca2fdc7191f8243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725993973770067293,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jmk4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e68bd70-3dc2-4c8f-818d-0859d202738b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3961cf85dbb735a4899f9a67c916992c41a359d2aa09979ad6ed87d6aa6a29b3,PodSandboxId:5485db014bfa9d4a7436dda6ba740d4b3ab463120297795771d4082317f6949c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725993973666929103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d980d98d4bd0fbfecb4cfc3d031d86,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095d4d5c5a9716c082af526256cae6715b2efbeb13d9e1690ebaf771a53cf711,PodSandboxId:9c179cee5ea162cd85722228ebd0de3d64450d2c572045391c717e1d64c1ea32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725993973594095355,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37818751bb2fa4540a2e8b87355d2631,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:119ffd076025ce6945c5cd3cbad2f0ae89f673daf7b141c31adec41da77a52c1,PodSandboxId:7f6e644265c579f1caa0dd98f2013f3df52336544cb063060ae2ce77b1c5041b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725993973534125044,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbacd3d6eed48eece5d493fdfcf5c160,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8426e19ac77455d95e36174f7592ac4d2405b87485e7e148c3d2b28e3940914,PodSandboxId:04a632a1e00e38da73c704734abf60ce3d3f7575f74afa528ef91382d2dabb9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725993973307795433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name:
kube-apiserver-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa45888fac5573c47ae79f638a951a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983e9a9d3564673d60be9b167b3ea924686bc8b9ef3c0ddaed620f8ea76e1f24,PodSandboxId:285eb066f986f3036eb5231a96acf437ff7e06f6cbe86ed52665a9a53bbb0318,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725993913053106871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-g
2hp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7afe4c51-299a-42f2-aac4-9875ff619516,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c33c9c586c6a5937d2578c10be537cb7fa1d36fb29df2727c94c2ef4e2b26bb7,PodSandboxId:aabeaa80bdda46ad148304d4618f447fd56ddc58cb430c14dc05e4567196f861,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725993913000156130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5xscn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23617cb-0321-4820-bb7e-395394d09cdb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=211e550f-f4c4-430e-94c3-b54f8ac1baa6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.570343048Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=307425a0-bb0f-456b-bc63-6809adb391ee name=/runtime.v1.RuntimeService/Version
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.570416114Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=307425a0-bb0f-456b-bc63-6809adb391ee name=/runtime.v1.RuntimeService/Version
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.571363441Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac1db51e-6b95-4ffc-83ad-b8c38ba1bcac name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.571839115Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725993997571814990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac1db51e-6b95-4ffc-83ad-b8c38ba1bcac name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.572278557Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c5bd09e-e948-4405-8ef7-fd9a94096298 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.572335365Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c5bd09e-e948-4405-8ef7-fd9a94096298 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.572731411Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe57e9b94054144de4c7735978012e930d3ff08727e74faf1937c449cf3a9788,PodSandboxId:e53f1c0c45688b6347a50c810d8bd3e2200aefb9a866cdde6ca2fdc7191f8243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725993993857364277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jmk4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e68bd70-3dc2-4c8f-818d-0859d202738b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edbefa9b70e3357ebd1e986566c217bbe2c41a69bbf6a1d4e4ed957d493cc98,PodSandboxId:3f4c5bf70475b3ca3f56b5afb4859959b4e7a846419d6124209754bda9d8a773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725993993864428805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7c36de5-52a2-44e9-9159-a6a3963562d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77af7cc5356b9d835b428d54cd36c5e430fbb1766645f589bcb0c8baa0d2fecc,PodSandboxId:7f6e644265c579f1caa0dd98f2013f3df52336544cb063060ae2ce77b1c5041b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725993990080007355,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbacd3d6eed48eece5d493fdfcf5c160,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082fc08955c141da1b4fb10abbfd8176e0ee2aedc01e90e061ce744614f48233,PodSandboxId:04a632a1e00e38da73c704734abf60ce3d3f7575f74afa528ef91382d2dabb9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725993990048209953,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa45888fac5573c47ae79f638a951a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe68e32301796feff0fdc4d3b92c9f7ed718992f3565edce400b7e89c8bf3e06,PodSandboxId:9c179cee5ea162cd85722228ebd0de3d64450d2c572045391c717e1d64c1ea32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725993990055952407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37818751bb2fa4540a2e8b87355d2631,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9ba869e0cab811ef95447ef937a384e3edfe9d530e24de9d396bd44b223391,PodSandboxId:5485db014bfa9d4a7436dda6ba740d4b3ab463120297795771d4082317f6949c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725993990033330259,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d980d98d4bd0fbfecb4cfc3d031d86,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10cf32c4d610fdb1b72852c31408218c59638672664ef5916097b331ce3756b8,PodSandboxId:be4db8336dbc32962ef9d7473722bc0d9eec15565c375772c70bd368c1c2191a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725993975010283012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-g2hp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7afe4c51-299a-42f2-aac4-9875ff619516,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8146b62c81eb1e4eab1e89ca915cd346b48876633ef7f8eae459d3b886de3b43,PodSandboxId:6f03b3db91bc611453499c8bf72588cf12cc7d4f8c28041f09af6b296fb79a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725993974880411652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5xscn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23617cb-0321-4820-bb7e-395394d09cdb,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf3269b7080ecb8a2dd56351f96b8d435cb1f7820635c5178a8b49324d0a826,PodSandboxId:3f4c5bf70475b3ca3f56b5afb4859959b4e7a846419d6124209754bda9d8a773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725993973911460466,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7c36de5-52a2-44e9-9159-a6a3963562d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdee655098222297e82d91aa6cb33152325e144dd5ab6ba9db1ab14052a91bf2,PodSandboxId:e53f1c0c45688b6347a50c810d8bd3e2200aefb9a866cdde6ca2fdc7191f8243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725993973770067293,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jmk4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e68bd70-3dc2-4c8f-818d-0859d202738b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3961cf85dbb735a4899f9a67c916992c41a359d2aa09979ad6ed87d6aa6a29b3,PodSandboxId:5485db014bfa9d4a7436dda6ba740d4b3ab463120297795771d4082317f6949c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725993973666929103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d980d98d4bd0fbfecb4cfc3d031d86,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095d4d5c5a9716c082af526256cae6715b2efbeb13d9e1690ebaf771a53cf711,PodSandboxId:9c179cee5ea162cd85722228ebd0de3d64450d2c572045391c717e1d64c1ea32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725993973594095355,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37818751bb2fa4540a2e8b87355d2631,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:119ffd076025ce6945c5cd3cbad2f0ae89f673daf7b141c31adec41da77a52c1,PodSandboxId:7f6e644265c579f1caa0dd98f2013f3df52336544cb063060ae2ce77b1c5041b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725993973534125044,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbacd3d6eed48eece5d493fdfcf5c160,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8426e19ac77455d95e36174f7592ac4d2405b87485e7e148c3d2b28e3940914,PodSandboxId:04a632a1e00e38da73c704734abf60ce3d3f7575f74afa528ef91382d2dabb9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725993973307795433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name:
kube-apiserver-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa45888fac5573c47ae79f638a951a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983e9a9d3564673d60be9b167b3ea924686bc8b9ef3c0ddaed620f8ea76e1f24,PodSandboxId:285eb066f986f3036eb5231a96acf437ff7e06f6cbe86ed52665a9a53bbb0318,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725993913053106871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-g
2hp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7afe4c51-299a-42f2-aac4-9875ff619516,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c33c9c586c6a5937d2578c10be537cb7fa1d36fb29df2727c94c2ef4e2b26bb7,PodSandboxId:aabeaa80bdda46ad148304d4618f447fd56ddc58cb430c14dc05e4567196f861,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725993913000156130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5xscn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23617cb-0321-4820-bb7e-395394d09cdb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c5bd09e-e948-4405-8ef7-fd9a94096298 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.610886262Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94ecf814-bce8-48cb-ae0c-e26b2c200a4b name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.611147261Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6f03b3db91bc611453499c8bf72588cf12cc7d4f8c28041f09af6b296fb79a7a,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-5xscn,Uid:b23617cb-0321-4820-bb7e-395394d09cdb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725993973280912661,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-5xscn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23617cb-0321-4820-bb7e-395394d09cdb,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T18:45:12.366719486Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:be4db8336dbc32962ef9d7473722bc0d9eec15565c375772c70bd368c1c2191a,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-g2hp8,Uid:7afe4c51-299a-42f2-aac4-9875ff619516,Namespac
e:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725993973277241767,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-g2hp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7afe4c51-299a-42f2-aac4-9875ff619516,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T18:45:12.353235125Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3f4c5bf70475b3ca3f56b5afb4859959b4e7a846419d6124209754bda9d8a773,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e7c36de5-52a2-44e9-9159-a6a3963562d9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725993973268285262,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7c36de5-52a2-44e9-9159-a6a3963562d9,},An
notations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-10T18:45:11.457128431Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e53f1c0c45688b6347a50c810d8bd3e2200aefb9a866cdde6ca2fdc7191f8243,Metadata:&PodSandboxMetadata{Name:kube-proxy-jmk4d,Uid:9e68bd70-3dc2-4c8f-818d-0859d202738b,N
amespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725993973258961621,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jmk4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e68bd70-3dc2-4c8f-818d-0859d202738b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-10T18:45:12.208325585Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5485db014bfa9d4a7436dda6ba740d4b3ab463120297795771d4082317f6949c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-192799,Uid:a2d980d98d4bd0fbfecb4cfc3d031d86,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725993973215315311,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d980d98d4bd0fbfecb4cf
c3d031d86,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a2d980d98d4bd0fbfecb4cfc3d031d86,kubernetes.io/config.seen: 2024-09-10T18:44:58.517262634Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9c179cee5ea162cd85722228ebd0de3d64450d2c572045391c717e1d64c1ea32,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-192799,Uid:37818751bb2fa4540a2e8b87355d2631,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725993973183724126,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37818751bb2fa4540a2e8b87355d2631,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 37818751bb2fa4540a2e8b87355d2631,kubernetes.io/config.seen: 2024-09-10T18:44:58.517261319Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7
f6e644265c579f1caa0dd98f2013f3df52336544cb063060ae2ce77b1c5041b,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-192799,Uid:bbacd3d6eed48eece5d493fdfcf5c160,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725993973130666330,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbacd3d6eed48eece5d493fdfcf5c160,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.145:2379,kubernetes.io/config.hash: bbacd3d6eed48eece5d493fdfcf5c160,kubernetes.io/config.seen: 2024-09-10T18:44:58.575291135Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:04a632a1e00e38da73c704734abf60ce3d3f7575f74afa528ef91382d2dabb9e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-192799,Uid:9fa45888fac5573c47ae79f638a951a3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY
,CreatedAt:1725993973043049787,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa45888fac5573c47ae79f638a951a3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.145:8443,kubernetes.io/config.hash: 9fa45888fac5573c47ae79f638a951a3,kubernetes.io/config.seen: 2024-09-10T18:44:58.517241874Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=94ecf814-bce8-48cb-ae0c-e26b2c200a4b name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.611967515Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8c360f5-ca4c-4de6-8519-88ff20d245c8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.612047070Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8c360f5-ca4c-4de6-8519-88ff20d245c8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.612230837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe57e9b94054144de4c7735978012e930d3ff08727e74faf1937c449cf3a9788,PodSandboxId:e53f1c0c45688b6347a50c810d8bd3e2200aefb9a866cdde6ca2fdc7191f8243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725993993857364277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jmk4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e68bd70-3dc2-4c8f-818d-0859d202738b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edbefa9b70e3357ebd1e986566c217bbe2c41a69bbf6a1d4e4ed957d493cc98,PodSandboxId:3f4c5bf70475b3ca3f56b5afb4859959b4e7a846419d6124209754bda9d8a773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725993993864428805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7c36de5-52a2-44e9-9159-a6a3963562d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77af7cc5356b9d835b428d54cd36c5e430fbb1766645f589bcb0c8baa0d2fecc,PodSandboxId:7f6e644265c579f1caa0dd98f2013f3df52336544cb063060ae2ce77b1c5041b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725993990080007355,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbacd3d6eed48eece5d493fdfcf5c160,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082fc08955c141da1b4fb10abbfd8176e0ee2aedc01e90e061ce744614f48233,PodSandboxId:04a632a1e00e38da73c704734abf60ce3d3f7575f74afa528ef91382d2dabb9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725993990048209953,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa45888fac5573c47ae79f638a951a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe68e32301796feff0fdc4d3b92c9f7ed718992f3565edce400b7e89c8bf3e06,PodSandboxId:9c179cee5ea162cd85722228ebd0de3d64450d2c572045391c717e1d64c1ea32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725993990055952407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37818751bb2fa4540a2e8b87355d2631,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9ba869e0cab811ef95447ef937a384e3edfe9d530e24de9d396bd44b223391,PodSandboxId:5485db014bfa9d4a7436dda6ba740d4b3ab463120297795771d4082317f6949c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725993990033330259,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d980d98d4bd0fbfecb4cfc3d031d86,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10cf32c4d610fdb1b72852c31408218c59638672664ef5916097b331ce3756b8,PodSandboxId:be4db8336dbc32962ef9d7473722bc0d9eec15565c375772c70bd368c1c2191a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725993975010283012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-g2hp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7afe4c51-299a-42f2-aac4-9875ff619516,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8146b62c81eb1e4eab1e89ca915cd346b48876633ef7f8eae459d3b886de3b43,PodSandboxId:6f03b3db91bc611453499c8bf72588cf12cc7d4f8c28041f09af6b296fb79a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725993974880411652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5xscn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23617cb-0321-4820-bb7e-395394d09cdb,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8c360f5-ca4c-4de6-8519-88ff20d245c8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.626704551Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e478589-5414-41a5-ac3b-8b94a67cbaa0 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.626783712Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e478589-5414-41a5-ac3b-8b94a67cbaa0 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.628590256Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9892f613-b72a-49d2-80c1-c2a0785e5f3c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.629380170Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725993997629359201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9892f613-b72a-49d2-80c1-c2a0785e5f3c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.630116938Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=584a1aba-ee26-4581-845a-35c12399e1ce name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.630193824Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=584a1aba-ee26-4581-845a-35c12399e1ce name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:46:37 kubernetes-upgrade-192799 crio[2262]: time="2024-09-10 18:46:37.630910543Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe57e9b94054144de4c7735978012e930d3ff08727e74faf1937c449cf3a9788,PodSandboxId:e53f1c0c45688b6347a50c810d8bd3e2200aefb9a866cdde6ca2fdc7191f8243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725993993857364277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jmk4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e68bd70-3dc2-4c8f-818d-0859d202738b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edbefa9b70e3357ebd1e986566c217bbe2c41a69bbf6a1d4e4ed957d493cc98,PodSandboxId:3f4c5bf70475b3ca3f56b5afb4859959b4e7a846419d6124209754bda9d8a773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725993993864428805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7c36de5-52a2-44e9-9159-a6a3963562d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77af7cc5356b9d835b428d54cd36c5e430fbb1766645f589bcb0c8baa0d2fecc,PodSandboxId:7f6e644265c579f1caa0dd98f2013f3df52336544cb063060ae2ce77b1c5041b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725993990080007355,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbacd3d6eed48eece5d493fdfcf5c160,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082fc08955c141da1b4fb10abbfd8176e0ee2aedc01e90e061ce744614f48233,PodSandboxId:04a632a1e00e38da73c704734abf60ce3d3f7575f74afa528ef91382d2dabb9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725993990048209953,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa45888fac5573c47ae79f638a951a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe68e32301796feff0fdc4d3b92c9f7ed718992f3565edce400b7e89c8bf3e06,PodSandboxId:9c179cee5ea162cd85722228ebd0de3d64450d2c572045391c717e1d64c1ea32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725993990055952407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37818751bb2fa4540a2e8b87355d2631,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9ba869e0cab811ef95447ef937a384e3edfe9d530e24de9d396bd44b223391,PodSandboxId:5485db014bfa9d4a7436dda6ba740d4b3ab463120297795771d4082317f6949c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725993990033330259,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d980d98d4bd0fbfecb4cfc3d031d86,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10cf32c4d610fdb1b72852c31408218c59638672664ef5916097b331ce3756b8,PodSandboxId:be4db8336dbc32962ef9d7473722bc0d9eec15565c375772c70bd368c1c2191a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725993975010283012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-g2hp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7afe4c51-299a-42f2-aac4-9875ff619516,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8146b62c81eb1e4eab1e89ca915cd346b48876633ef7f8eae459d3b886de3b43,PodSandboxId:6f03b3db91bc611453499c8bf72588cf12cc7d4f8c28041f09af6b296fb79a7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725993974880411652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5xscn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23617cb-0321-4820-bb7e-395394d09cdb,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf3269b7080ecb8a2dd56351f96b8d435cb1f7820635c5178a8b49324d0a826,PodSandboxId:3f4c5bf70475b3ca3f56b5afb4859959b4e7a846419d6124209754bda9d8a773,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725993973911460466,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7c36de5-52a2-44e9-9159-a6a3963562d9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdee655098222297e82d91aa6cb33152325e144dd5ab6ba9db1ab14052a91bf2,PodSandboxId:e53f1c0c45688b6347a50c810d8bd3e2200aefb9a866cdde6ca2fdc7191f8243,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725993973770067293,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jmk4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e68bd70-3dc2-4c8f-818d-0859d202738b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3961cf85dbb735a4899f9a67c916992c41a359d2aa09979ad6ed87d6aa6a29b3,PodSandboxId:5485db014bfa9d4a7436dda6ba740d4b3ab463120297795771d4082317f6949c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725993973666929103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d980d98d4bd0fbfecb4cfc3d031d86,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095d4d5c5a9716c082af526256cae6715b2efbeb13d9e1690ebaf771a53cf711,PodSandboxId:9c179cee5ea162cd85722228ebd0de3d64450d2c572045391c717e1d64c1ea32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725993973594095355,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37818751bb2fa4540a2e8b87355d2631,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:119ffd076025ce6945c5cd3cbad2f0ae89f673daf7b141c31adec41da77a52c1,PodSandboxId:7f6e644265c579f1caa0dd98f2013f3df52336544cb063060ae2ce77b1c5041b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725993973534125044,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbacd3d6eed48eece5d493fdfcf5c160,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8426e19ac77455d95e36174f7592ac4d2405b87485e7e148c3d2b28e3940914,PodSandboxId:04a632a1e00e38da73c704734abf60ce3d3f7575f74afa528ef91382d2dabb9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725993973307795433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name:
kube-apiserver-kubernetes-upgrade-192799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fa45888fac5573c47ae79f638a951a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983e9a9d3564673d60be9b167b3ea924686bc8b9ef3c0ddaed620f8ea76e1f24,PodSandboxId:285eb066f986f3036eb5231a96acf437ff7e06f6cbe86ed52665a9a53bbb0318,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725993913053106871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-g
2hp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7afe4c51-299a-42f2-aac4-9875ff619516,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c33c9c586c6a5937d2578c10be537cb7fa1d36fb29df2727c94c2ef4e2b26bb7,PodSandboxId:aabeaa80bdda46ad148304d4618f447fd56ddc58cb430c14dc05e4567196f861,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725993913000156130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5xscn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b23617cb-0321-4820-bb7e-395394d09cdb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=584a1aba-ee26-4581-845a-35c12399e1ce name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1edbefa9b70e3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       2                   3f4c5bf70475b       storage-provisioner
	fe57e9b940541       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   3 seconds ago        Running             kube-proxy                2                   e53f1c0c45688       kube-proxy-jmk4d
	77af7cc5356b9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago        Running             etcd                      2                   7f6e644265c57       etcd-kubernetes-upgrade-192799
	fe68e32301796       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   7 seconds ago        Running             kube-controller-manager   2                   9c179cee5ea16       kube-controller-manager-kubernetes-upgrade-192799
	082fc08955c14       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   7 seconds ago        Running             kube-apiserver            2                   04a632a1e00e3       kube-apiserver-kubernetes-upgrade-192799
	de9ba869e0cab       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   7 seconds ago        Running             kube-scheduler            2                   5485db014bfa9       kube-scheduler-kubernetes-upgrade-192799
	10cf32c4d610f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   22 seconds ago       Running             coredns                   1                   be4db8336dbc3       coredns-6f6b679f8f-g2hp8
	8146b62c81eb1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   22 seconds ago       Running             coredns                   1                   6f03b3db91bc6       coredns-6f6b679f8f-5xscn
	3cf3269b7080e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   23 seconds ago       Exited              storage-provisioner       1                   3f4c5bf70475b       storage-provisioner
	bdee655098222       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   23 seconds ago       Exited              kube-proxy                1                   e53f1c0c45688       kube-proxy-jmk4d
	3961cf85dbb73       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   24 seconds ago       Exited              kube-scheduler            1                   5485db014bfa9       kube-scheduler-kubernetes-upgrade-192799
	095d4d5c5a971       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   24 seconds ago       Exited              kube-controller-manager   1                   9c179cee5ea16       kube-controller-manager-kubernetes-upgrade-192799
	119ffd076025c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   24 seconds ago       Exited              etcd                      1                   7f6e644265c57       etcd-kubernetes-upgrade-192799
	d8426e19ac774       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   24 seconds ago       Exited              kube-apiserver            1                   04a632a1e00e3       kube-apiserver-kubernetes-upgrade-192799
	983e9a9d35646       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   285eb066f986f       coredns-6f6b679f8f-g2hp8
	c33c9c586c6a5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   aabeaa80bdda4       coredns-6f6b679f8f-5xscn
	
	
	==> coredns [10cf32c4d610fdb1b72852c31408218c59638672664ef5916097b331ce3756b8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [8146b62c81eb1e4eab1e89ca915cd346b48876633ef7f8eae459d3b886de3b43] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [983e9a9d3564673d60be9b167b3ea924686bc8b9ef3c0ddaed620f8ea76e1f24] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1609131732]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:45:13.276) (total time: 30005ms):
	Trace[1609131732]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (18:45:43.279)
	Trace[1609131732]: [30.005435589s] [30.005435589s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[224316661]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:45:13.278) (total time: 30004ms):
	Trace[224316661]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:45:43.279)
	Trace[224316661]: [30.004414022s] [30.004414022s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[210707862]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:45:13.278) (total time: 30003ms):
	Trace[210707862]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (18:45:43.281)
	Trace[210707862]: [30.003944564s] [30.003944564s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c33c9c586c6a5937d2578c10be537cb7fa1d36fb29df2727c94c2ef4e2b26bb7] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2070062840]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:45:13.277) (total time: 30002ms):
	Trace[2070062840]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:45:43.279)
	Trace[2070062840]: [30.002336477s] [30.002336477s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[223166121]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:45:13.278) (total time: 30002ms):
	Trace[223166121]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (18:45:43.280)
	Trace[223166121]: [30.002519554s] [30.002519554s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[282795357]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:45:13.278) (total time: 30002ms):
	Trace[282795357]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (18:45:43.281)
	Trace[282795357]: [30.002701403s] [30.002701403s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-192799
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-192799
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:45:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-192799
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:46:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:46:33 +0000   Tue, 10 Sep 2024 18:45:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:46:33 +0000   Tue, 10 Sep 2024 18:45:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:46:33 +0000   Tue, 10 Sep 2024 18:45:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:46:33 +0000   Tue, 10 Sep 2024 18:45:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.145
	  Hostname:    kubernetes-upgrade-192799
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 133f04b243dd4df5ac2663e3e65d4e97
	  System UUID:                133f04b2-43dd-4df5-ac26-63e3e65d4e97
	  Boot ID:                    011f569e-51b0-4759-9938-de0b4e8c3790
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-5xscn                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     86s
	  kube-system                 coredns-6f6b679f8f-g2hp8                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     86s
	  kube-system                 etcd-kubernetes-upgrade-192799                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         87s
	  kube-system                 kube-apiserver-kubernetes-upgrade-192799             250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-192799    200m (10%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-jmk4d                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-kubernetes-upgrade-192799             100m (5%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                 From             Message
	  ----    ------                   ----                ----             -------
	  Normal  Starting                 84s                 kube-proxy       
	  Normal  Starting                 3s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  97s (x8 over 100s)  kubelet          Node kubernetes-upgrade-192799 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s (x8 over 100s)  kubelet          Node kubernetes-upgrade-192799 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s (x7 over 100s)  kubelet          Node kubernetes-upgrade-192799 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           87s                 node-controller  Node kubernetes-upgrade-192799 event: Registered Node kubernetes-upgrade-192799 in Controller
	  Normal  Starting                 9s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)     kubelet          Node kubernetes-upgrade-192799 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)     kubelet          Node kubernetes-upgrade-192799 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)     kubelet          Node kubernetes-upgrade-192799 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                  node-controller  Node kubernetes-upgrade-192799 event: Registered Node kubernetes-upgrade-192799 in Controller
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.907966] systemd-fstab-generator[552]: Ignoring "noauto" option for root device
	[  +0.053291] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071434] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.181188] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.114516] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.280278] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +4.163611] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +1.896586] systemd-fstab-generator[831]: Ignoring "noauto" option for root device
	[  +0.061397] kauditd_printk_skb: 158 callbacks suppressed
	[Sep10 18:45] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	[  +0.085542] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.576193] kauditd_printk_skb: 21 callbacks suppressed
	[ +40.146519] kauditd_printk_skb: 76 callbacks suppressed
	[Sep10 18:46] systemd-fstab-generator[2187]: Ignoring "noauto" option for root device
	[  +0.175225] systemd-fstab-generator[2199]: Ignoring "noauto" option for root device
	[  +0.183805] systemd-fstab-generator[2213]: Ignoring "noauto" option for root device
	[  +0.164669] systemd-fstab-generator[2225]: Ignoring "noauto" option for root device
	[  +0.298105] systemd-fstab-generator[2253]: Ignoring "noauto" option for root device
	[  +0.928580] systemd-fstab-generator[2402]: Ignoring "noauto" option for root device
	[  +3.651374] kauditd_printk_skb: 228 callbacks suppressed
	[ +13.076462] systemd-fstab-generator[3432]: Ignoring "noauto" option for root device
	[  +6.293807] systemd-fstab-generator[3889]: Ignoring "noauto" option for root device
	[  +0.131555] kauditd_printk_skb: 51 callbacks suppressed
	
	
	==> etcd [119ffd076025ce6945c5cd3cbad2f0ae89f673daf7b141c31adec41da77a52c1] <==
	{"level":"info","ts":"2024-09-10T18:46:16.280609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-10T18:46:16.280722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgPreVoteResp from 44b3a0f32f80bb09 at term 2"}
	{"level":"info","ts":"2024-09-10T18:46:16.280741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became candidate at term 3"}
	{"level":"info","ts":"2024-09-10T18:46:16.280747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgVoteResp from 44b3a0f32f80bb09 at term 3"}
	{"level":"info","ts":"2024-09-10T18:46:16.280755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became leader at term 3"}
	{"level":"info","ts":"2024-09-10T18:46:16.280763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 44b3a0f32f80bb09 elected leader 44b3a0f32f80bb09 at term 3"}
	{"level":"info","ts":"2024-09-10T18:46:16.282739Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"44b3a0f32f80bb09","local-member-attributes":"{Name:kubernetes-upgrade-192799 ClientURLs:[https://192.168.39.145:2379]}","request-path":"/0/members/44b3a0f32f80bb09/attributes","cluster-id":"33ee9922f2bf4379","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T18:46:16.282809Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:46:16.283249Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:46:16.284048Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:46:16.284278Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:46:16.284926Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.145:2379"}
	{"level":"info","ts":"2024-09-10T18:46:16.285414Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T18:46:16.285459Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T18:46:16.285806Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T18:46:18.143776Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-10T18:46:18.143886Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-192799","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"]}
	{"level":"warn","ts":"2024-09-10T18:46:18.144026Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.145:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-10T18:46:18.144109Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.145:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-10T18:46:18.144240Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-10T18:46:18.144335Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-10T18:46:18.168654Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"44b3a0f32f80bb09","current-leader-member-id":"44b3a0f32f80bb09"}
	{"level":"info","ts":"2024-09-10T18:46:18.171920Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-09-10T18:46:18.172066Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-09-10T18:46:18.172102Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-192799","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"]}
	
	
	==> etcd [77af7cc5356b9d835b428d54cd36c5e430fbb1766645f589bcb0c8baa0d2fecc] <==
	{"level":"info","ts":"2024-09-10T18:46:30.406361Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"33ee9922f2bf4379","local-member-id":"44b3a0f32f80bb09","added-peer-id":"44b3a0f32f80bb09","added-peer-peer-urls":["https://192.168.39.145:2380"]}
	{"level":"info","ts":"2024-09-10T18:46:30.406444Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"33ee9922f2bf4379","local-member-id":"44b3a0f32f80bb09","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:46:30.406482Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:46:30.415887Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:46:30.418097Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-10T18:46:30.418186Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-09-10T18:46:30.418345Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-09-10T18:46:30.420138Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-10T18:46:30.420206Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"44b3a0f32f80bb09","initial-advertise-peer-urls":["https://192.168.39.145:2380"],"listen-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.145:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-10T18:46:32.192781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-10T18:46:32.192897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-10T18:46:32.192957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgPreVoteResp from 44b3a0f32f80bb09 at term 3"}
	{"level":"info","ts":"2024-09-10T18:46:32.193000Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became candidate at term 4"}
	{"level":"info","ts":"2024-09-10T18:46:32.193026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgVoteResp from 44b3a0f32f80bb09 at term 4"}
	{"level":"info","ts":"2024-09-10T18:46:32.193061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became leader at term 4"}
	{"level":"info","ts":"2024-09-10T18:46:32.193087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 44b3a0f32f80bb09 elected leader 44b3a0f32f80bb09 at term 4"}
	{"level":"info","ts":"2024-09-10T18:46:32.198452Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"44b3a0f32f80bb09","local-member-attributes":"{Name:kubernetes-upgrade-192799 ClientURLs:[https://192.168.39.145:2379]}","request-path":"/0/members/44b3a0f32f80bb09/attributes","cluster-id":"33ee9922f2bf4379","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T18:46:32.198626Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:46:32.198722Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T18:46:32.198760Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T18:46:32.198778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:46:32.199636Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:46:32.200448Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T18:46:32.199680Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:46:32.201314Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.145:2379"}
	
	
	==> kernel <==
	 18:46:38 up 2 min,  0 users,  load average: 1.75, 0.64, 0.23
	Linux kubernetes-upgrade-192799 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [082fc08955c141da1b4fb10abbfd8176e0ee2aedc01e90e061ce744614f48233] <==
	I0910 18:46:33.573614       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0910 18:46:33.577239       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0910 18:46:33.577276       1 policy_source.go:224] refreshing policies
	I0910 18:46:33.602680       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0910 18:46:33.603560       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0910 18:46:33.603781       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0910 18:46:33.604894       1 aggregator.go:171] initial CRD sync complete...
	I0910 18:46:33.604996       1 autoregister_controller.go:144] Starting autoregister controller
	I0910 18:46:33.605136       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0910 18:46:33.605178       1 cache.go:39] Caches are synced for autoregister controller
	I0910 18:46:33.606878       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0910 18:46:33.606940       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0910 18:46:33.607899       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0910 18:46:33.608097       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0910 18:46:33.608454       1 shared_informer.go:320] Caches are synced for configmaps
	I0910 18:46:33.616696       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0910 18:46:33.649016       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0910 18:46:34.414296       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0910 18:46:35.225144       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0910 18:46:35.235763       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0910 18:46:35.279186       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0910 18:46:35.413674       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0910 18:46:35.433616       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0910 18:46:36.484437       1 controller.go:615] quota admission added evaluator for: endpoints
	I0910 18:46:36.981312       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [d8426e19ac77455d95e36174f7592ac4d2405b87485e7e148c3d2b28e3940914] <==
	W0910 18:46:27.509074       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.511562       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.512831       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.579212       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.628825       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.642238       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.642322       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.658096       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.682747       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.685147       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.725010       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.750895       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.776395       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.807149       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.808624       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.820303       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.880326       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.919423       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.928160       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.953742       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:27.993967       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:28.153345       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:28.205652       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:28.216396       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 18:46:28.273974       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [095d4d5c5a9716c082af526256cae6715b2efbeb13d9e1690ebaf771a53cf711] <==
	I0910 18:46:15.517662       1 serving.go:386] Generated self-signed cert in-memory
	I0910 18:46:15.840775       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0910 18:46:15.840815       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:46:15.842553       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0910 18:46:15.842736       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0910 18:46:15.842803       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0910 18:46:15.842958       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [fe68e32301796feff0fdc4d3b92c9f7ed718992f3565edce400b7e89c8bf3e06] <==
	I0910 18:46:36.894024       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0910 18:46:36.900353       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0910 18:46:36.922915       1 shared_informer.go:320] Caches are synced for taint
	I0910 18:46:36.923141       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0910 18:46:36.923208       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-192799"
	I0910 18:46:36.923244       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0910 18:46:36.924055       1 shared_informer.go:320] Caches are synced for node
	I0910 18:46:36.924087       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0910 18:46:36.924103       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0910 18:46:36.924107       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0910 18:46:36.924111       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0910 18:46:36.924253       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-192799"
	I0910 18:46:36.929266       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0910 18:46:36.929342       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-192799"
	I0910 18:46:36.941435       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0910 18:46:36.976953       1 shared_informer.go:320] Caches are synced for endpoint
	I0910 18:46:37.040182       1 shared_informer.go:320] Caches are synced for resource quota
	I0910 18:46:37.047796       1 shared_informer.go:320] Caches are synced for disruption
	I0910 18:46:37.079598       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0910 18:46:37.079894       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="95.464µs"
	I0910 18:46:37.090236       1 shared_informer.go:320] Caches are synced for resource quota
	I0910 18:46:37.130146       1 shared_informer.go:320] Caches are synced for deployment
	I0910 18:46:37.530322       1 shared_informer.go:320] Caches are synced for garbage collector
	I0910 18:46:37.551772       1 shared_informer.go:320] Caches are synced for garbage collector
	I0910 18:46:37.551797       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [bdee655098222297e82d91aa6cb33152325e144dd5ab6ba9db1ab14052a91bf2] <==
	
	
	==> kube-proxy [fe57e9b94054144de4c7735978012e930d3ff08727e74faf1937c449cf3a9788] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 18:46:34.082941       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 18:46:34.095138       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.145"]
	E0910 18:46:34.095205       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 18:46:34.137216       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 18:46:34.137277       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 18:46:34.137305       1 server_linux.go:169] "Using iptables Proxier"
	I0910 18:46:34.140176       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 18:46:34.140432       1 server.go:483] "Version info" version="v1.31.0"
	I0910 18:46:34.140459       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:46:34.141893       1 config.go:197] "Starting service config controller"
	I0910 18:46:34.141933       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 18:46:34.141968       1 config.go:104] "Starting endpoint slice config controller"
	I0910 18:46:34.141972       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 18:46:34.142428       1 config.go:326] "Starting node config controller"
	I0910 18:46:34.142454       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 18:46:34.242078       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 18:46:34.242143       1 shared_informer.go:320] Caches are synced for service config
	I0910 18:46:34.242598       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3961cf85dbb735a4899f9a67c916992c41a359d2aa09979ad6ed87d6aa6a29b3] <==
	I0910 18:46:15.463751       1 serving.go:386] Generated self-signed cert in-memory
	I0910 18:46:17.854239       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0910 18:46:17.854355       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0910 18:46:17.854465       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	W0910 18:46:17.866700       1 secure_serving.go:111] Initial population of client CA failed: Get "https://192.168.39.145:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": context canceled
	I0910 18:46:17.869312       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0910 18:46:17.870385       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E0910 18:46:17.870595       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [de9ba869e0cab811ef95447ef937a384e3edfe9d530e24de9d396bd44b223391] <==
	I0910 18:46:31.005752       1 serving.go:386] Generated self-signed cert in-memory
	W0910 18:46:33.509959       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0910 18:46:33.512642       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0910 18:46:33.512726       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0910 18:46:33.512761       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0910 18:46:33.550221       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0910 18:46:33.550319       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:46:33.558929       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0910 18:46:33.559589       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0910 18:46:33.559959       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 18:46:33.562764       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0910 18:46:33.663579       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 18:46:29 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:29.728178    3439 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/bbacd3d6eed48eece5d493fdfcf5c160-etcd-certs\") pod \"etcd-kubernetes-upgrade-192799\" (UID: \"bbacd3d6eed48eece5d493fdfcf5c160\") " pod="kube-system/etcd-kubernetes-upgrade-192799"
	Sep 10 18:46:29 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:29.829592    3439 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9fa45888fac5573c47ae79f638a951a3-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-192799\" (UID: \"9fa45888fac5573c47ae79f638a951a3\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-192799"
	Sep 10 18:46:29 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:29.829792    3439 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9fa45888fac5573c47ae79f638a951a3-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-192799\" (UID: \"9fa45888fac5573c47ae79f638a951a3\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-192799"
	Sep 10 18:46:29 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:29.829879    3439 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9fa45888fac5573c47ae79f638a951a3-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-192799\" (UID: \"9fa45888fac5573c47ae79f638a951a3\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-192799"
	Sep 10 18:46:29 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:29.905464    3439 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-192799"
	Sep 10 18:46:29 kubernetes-upgrade-192799 kubelet[3439]: E0910 18:46:29.906307    3439 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.145:8443: connect: connection refused" node="kubernetes-upgrade-192799"
	Sep 10 18:46:30 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:30.013827    3439 scope.go:117] "RemoveContainer" containerID="095d4d5c5a9716c082af526256cae6715b2efbeb13d9e1690ebaf771a53cf711"
	Sep 10 18:46:30 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:30.014260    3439 scope.go:117] "RemoveContainer" containerID="3961cf85dbb735a4899f9a67c916992c41a359d2aa09979ad6ed87d6aa6a29b3"
	Sep 10 18:46:30 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:30.016084    3439 scope.go:117] "RemoveContainer" containerID="119ffd076025ce6945c5cd3cbad2f0ae89f673daf7b141c31adec41da77a52c1"
	Sep 10 18:46:30 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:30.016575    3439 scope.go:117] "RemoveContainer" containerID="d8426e19ac77455d95e36174f7592ac4d2405b87485e7e148c3d2b28e3940914"
	Sep 10 18:46:30 kubernetes-upgrade-192799 kubelet[3439]: E0910 18:46:30.119413    3439 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-192799?timeout=10s\": dial tcp 192.168.39.145:8443: connect: connection refused" interval="800ms"
	Sep 10 18:46:30 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:30.308427    3439 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-192799"
	Sep 10 18:46:30 kubernetes-upgrade-192799 kubelet[3439]: E0910 18:46:30.309372    3439 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.145:8443: connect: connection refused" node="kubernetes-upgrade-192799"
	Sep 10 18:46:31 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:31.111569    3439 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-192799"
	Sep 10 18:46:33 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:33.500753    3439 apiserver.go:52] "Watching apiserver"
	Sep 10 18:46:33 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:33.613538    3439 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 10 18:46:33 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:33.647703    3439 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e7c36de5-52a2-44e9-9159-a6a3963562d9-tmp\") pod \"storage-provisioner\" (UID: \"e7c36de5-52a2-44e9-9159-a6a3963562d9\") " pod="kube-system/storage-provisioner"
	Sep 10 18:46:33 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:33.647860    3439 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e68bd70-3dc2-4c8f-818d-0859d202738b-lib-modules\") pod \"kube-proxy-jmk4d\" (UID: \"9e68bd70-3dc2-4c8f-818d-0859d202738b\") " pod="kube-system/kube-proxy-jmk4d"
	Sep 10 18:46:33 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:33.647964    3439 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e68bd70-3dc2-4c8f-818d-0859d202738b-xtables-lock\") pod \"kube-proxy-jmk4d\" (UID: \"9e68bd70-3dc2-4c8f-818d-0859d202738b\") " pod="kube-system/kube-proxy-jmk4d"
	Sep 10 18:46:33 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:33.649261    3439 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-192799"
	Sep 10 18:46:33 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:33.649385    3439 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-192799"
	Sep 10 18:46:33 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:33.649442    3439 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 10 18:46:33 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:33.650460    3439 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 10 18:46:33 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:33.846619    3439 scope.go:117] "RemoveContainer" containerID="3cf3269b7080ecb8a2dd56351f96b8d435cb1f7820635c5178a8b49324d0a826"
	Sep 10 18:46:33 kubernetes-upgrade-192799 kubelet[3439]: I0910 18:46:33.847200    3439 scope.go:117] "RemoveContainer" containerID="bdee655098222297e82d91aa6cb33152325e144dd5ab6ba9db1ab14052a91bf2"
	
	
	==> storage-provisioner [1edbefa9b70e3357ebd1e986566c217bbe2c41a69bbf6a1d4e4ed957d493cc98] <==
	I0910 18:46:33.972050       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 18:46:33.996267       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 18:46:33.996662       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [3cf3269b7080ecb8a2dd56351f96b8d435cb1f7820635c5178a8b49324d0a826] <==
	I0910 18:46:14.893313       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:46:37.054425   57179 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19598-5973/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-192799 -n kubernetes-upgrade-192799
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-192799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-192799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-192799
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-192799: (1.302358997s)
--- FAIL: TestKubernetesUpgrade (454.54s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (57.86s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-459729 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-459729 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.254256519s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-459729] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-459729" primary control-plane node in "pause-459729" cluster
	* Updating the running kvm2 "pause-459729" VM ...
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-459729" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 18:42:32.780187   53190 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:42:32.780573   53190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:42:32.780605   53190 out.go:358] Setting ErrFile to fd 2...
	I0910 18:42:32.780620   53190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:42:32.780898   53190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:42:32.781633   53190 out.go:352] Setting JSON to false
	I0910 18:42:32.783010   53190 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5105,"bootTime":1725988648,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 18:42:32.783118   53190 start.go:139] virtualization: kvm guest
	I0910 18:42:32.785769   53190 out.go:177] * [pause-459729] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 18:42:32.787209   53190 notify.go:220] Checking for updates...
	I0910 18:42:32.787241   53190 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:42:32.788544   53190 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:42:32.790353   53190 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:42:32.791973   53190 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:42:32.793133   53190 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 18:42:32.794279   53190 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:42:32.795954   53190 config.go:182] Loaded profile config "pause-459729": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:42:32.796635   53190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:42:32.796724   53190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:42:32.824698   53190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45021
	I0910 18:42:32.825507   53190 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:42:32.826116   53190 main.go:141] libmachine: Using API Version  1
	I0910 18:42:32.826136   53190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:42:32.826566   53190 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:42:32.826746   53190 main.go:141] libmachine: (pause-459729) Calling .DriverName
	I0910 18:42:32.827199   53190 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:42:32.827598   53190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:42:32.827634   53190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:42:32.844136   53190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46321
	I0910 18:42:32.844550   53190 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:42:32.845232   53190 main.go:141] libmachine: Using API Version  1
	I0910 18:42:32.845249   53190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:42:32.845614   53190 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:42:32.845763   53190 main.go:141] libmachine: (pause-459729) Calling .DriverName
	I0910 18:42:32.892439   53190 out.go:177] * Using the kvm2 driver based on existing profile
	I0910 18:42:32.893963   53190 start.go:297] selected driver: kvm2
	I0910 18:42:32.893984   53190 start.go:901] validating driver "kvm2" against &{Name:pause-459729 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.0 ClusterName:pause-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devi
ce-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:42:32.894158   53190 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:42:32.894625   53190 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:42:32.894711   53190 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 18:42:32.917835   53190 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 18:42:32.918848   53190 cni.go:84] Creating CNI manager for ""
	I0910 18:42:32.918869   53190 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:42:32.918970   53190 start.go:340] cluster config:
	{Name:pause-459729 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-459729 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:f
alse registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:42:32.919158   53190 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:42:32.922492   53190 out.go:177] * Starting "pause-459729" primary control-plane node in "pause-459729" cluster
	I0910 18:42:32.923812   53190 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:42:32.923862   53190 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0910 18:42:32.923874   53190 cache.go:56] Caching tarball of preloaded images
	I0910 18:42:32.923982   53190 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 18:42:32.923994   53190 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 18:42:32.924154   53190 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/pause-459729/config.json ...
	I0910 18:42:32.924475   53190 start.go:360] acquireMachinesLock for pause-459729: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:42:32.924531   53190 start.go:364] duration metric: took 32.65µs to acquireMachinesLock for "pause-459729"
	I0910 18:42:32.924547   53190 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:42:32.924561   53190 fix.go:54] fixHost starting: 
	I0910 18:42:32.924927   53190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:42:32.924969   53190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:42:32.945948   53190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
	I0910 18:42:32.946354   53190 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:42:32.946880   53190 main.go:141] libmachine: Using API Version  1
	I0910 18:42:32.946901   53190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:42:32.947218   53190 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:42:32.947408   53190 main.go:141] libmachine: (pause-459729) Calling .DriverName
	I0910 18:42:32.947577   53190 main.go:141] libmachine: (pause-459729) Calling .GetState
	I0910 18:42:32.949831   53190 fix.go:112] recreateIfNeeded on pause-459729: state=Running err=<nil>
	W0910 18:42:32.949852   53190 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:42:32.951645   53190 out.go:177] * Updating the running kvm2 "pause-459729" VM ...
	I0910 18:42:32.952830   53190 machine.go:93] provisionDockerMachine start ...
	I0910 18:42:32.952859   53190 main.go:141] libmachine: (pause-459729) Calling .DriverName
	I0910 18:42:32.953163   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHHostname
	I0910 18:42:32.956370   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:32.956749   53190 main.go:141] libmachine: (pause-459729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:02", ip: ""} in network mk-pause-459729: {Iface:virbr2 ExpiryTime:2024-09-10 19:41:25 +0000 UTC Type:0 Mac:52:54:00:25:60:02 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:pause-459729 Clientid:01:52:54:00:25:60:02}
	I0910 18:42:32.956769   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined IP address 192.168.50.99 and MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:32.956985   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHPort
	I0910 18:42:32.957217   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHKeyPath
	I0910 18:42:32.957366   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHKeyPath
	I0910 18:42:32.957500   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHUsername
	I0910 18:42:32.957646   53190 main.go:141] libmachine: Using SSH client type: native
	I0910 18:42:32.957873   53190 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0910 18:42:32.957882   53190 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:42:33.099833   53190 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-459729
	
	I0910 18:42:33.099901   53190 main.go:141] libmachine: (pause-459729) Calling .GetMachineName
	I0910 18:42:33.100548   53190 buildroot.go:166] provisioning hostname "pause-459729"
	I0910 18:42:33.100618   53190 main.go:141] libmachine: (pause-459729) Calling .GetMachineName
	I0910 18:42:33.100898   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHHostname
	I0910 18:42:33.104277   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:33.104833   53190 main.go:141] libmachine: (pause-459729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:02", ip: ""} in network mk-pause-459729: {Iface:virbr2 ExpiryTime:2024-09-10 19:41:25 +0000 UTC Type:0 Mac:52:54:00:25:60:02 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:pause-459729 Clientid:01:52:54:00:25:60:02}
	I0910 18:42:33.104854   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined IP address 192.168.50.99 and MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:33.105172   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHPort
	I0910 18:42:33.105344   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHKeyPath
	I0910 18:42:33.105629   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHKeyPath
	I0910 18:42:33.105790   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHUsername
	I0910 18:42:33.106008   53190 main.go:141] libmachine: Using SSH client type: native
	I0910 18:42:33.106209   53190 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0910 18:42:33.106222   53190 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-459729 && echo "pause-459729" | sudo tee /etc/hostname
	I0910 18:42:33.261996   53190 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-459729
	
	I0910 18:42:33.262030   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHHostname
	I0910 18:42:33.265598   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:33.266187   53190 main.go:141] libmachine: (pause-459729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:02", ip: ""} in network mk-pause-459729: {Iface:virbr2 ExpiryTime:2024-09-10 19:41:25 +0000 UTC Type:0 Mac:52:54:00:25:60:02 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:pause-459729 Clientid:01:52:54:00:25:60:02}
	I0910 18:42:33.266280   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined IP address 192.168.50.99 and MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:33.266806   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHPort
	I0910 18:42:33.267011   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHKeyPath
	I0910 18:42:33.267193   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHKeyPath
	I0910 18:42:33.267356   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHUsername
	I0910 18:42:33.267526   53190 main.go:141] libmachine: Using SSH client type: native
	I0910 18:42:33.267761   53190 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0910 18:42:33.267786   53190 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-459729' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-459729/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-459729' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:42:33.404082   53190 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:42:33.404111   53190 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:42:33.404151   53190 buildroot.go:174] setting up certificates
	I0910 18:42:33.404163   53190 provision.go:84] configureAuth start
	I0910 18:42:33.404177   53190 main.go:141] libmachine: (pause-459729) Calling .GetMachineName
	I0910 18:42:33.404514   53190 main.go:141] libmachine: (pause-459729) Calling .GetIP
	I0910 18:42:33.408217   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:33.409343   53190 main.go:141] libmachine: (pause-459729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:02", ip: ""} in network mk-pause-459729: {Iface:virbr2 ExpiryTime:2024-09-10 19:41:25 +0000 UTC Type:0 Mac:52:54:00:25:60:02 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:pause-459729 Clientid:01:52:54:00:25:60:02}
	I0910 18:42:33.409381   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined IP address 192.168.50.99 and MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:33.409708   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHHostname
	I0910 18:42:33.412942   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:33.413406   53190 main.go:141] libmachine: (pause-459729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:02", ip: ""} in network mk-pause-459729: {Iface:virbr2 ExpiryTime:2024-09-10 19:41:25 +0000 UTC Type:0 Mac:52:54:00:25:60:02 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:pause-459729 Clientid:01:52:54:00:25:60:02}
	I0910 18:42:33.413444   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined IP address 192.168.50.99 and MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:33.413594   53190 provision.go:143] copyHostCerts
	I0910 18:42:33.413657   53190 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:42:33.413666   53190 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:42:33.413733   53190 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:42:33.413876   53190 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:42:33.413883   53190 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:42:33.413904   53190 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:42:33.413964   53190 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:42:33.413968   53190 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:42:33.413985   53190 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:42:33.414045   53190 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.pause-459729 san=[127.0.0.1 192.168.50.99 localhost minikube pause-459729]
	I0910 18:42:33.611969   53190 provision.go:177] copyRemoteCerts
	I0910 18:42:33.612047   53190 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:42:33.612078   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHHostname
	I0910 18:42:33.615671   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:33.616168   53190 main.go:141] libmachine: (pause-459729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:02", ip: ""} in network mk-pause-459729: {Iface:virbr2 ExpiryTime:2024-09-10 19:41:25 +0000 UTC Type:0 Mac:52:54:00:25:60:02 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:pause-459729 Clientid:01:52:54:00:25:60:02}
	I0910 18:42:33.616200   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined IP address 192.168.50.99 and MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:33.616395   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHPort
	I0910 18:42:33.616570   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHKeyPath
	I0910 18:42:33.616745   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHUsername
	I0910 18:42:33.616886   53190 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/pause-459729/id_rsa Username:docker}
	I0910 18:42:33.715684   53190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:42:33.752370   53190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0910 18:42:33.789546   53190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 18:42:33.821981   53190 provision.go:87] duration metric: took 417.802401ms to configureAuth
	I0910 18:42:33.822015   53190 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:42:33.822297   53190 config.go:182] Loaded profile config "pause-459729": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:42:33.822413   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHHostname
	I0910 18:42:33.825696   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:33.826104   53190 main.go:141] libmachine: (pause-459729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:02", ip: ""} in network mk-pause-459729: {Iface:virbr2 ExpiryTime:2024-09-10 19:41:25 +0000 UTC Type:0 Mac:52:54:00:25:60:02 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:pause-459729 Clientid:01:52:54:00:25:60:02}
	I0910 18:42:33.826138   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined IP address 192.168.50.99 and MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:33.826337   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHPort
	I0910 18:42:33.826542   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHKeyPath
	I0910 18:42:33.826732   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHKeyPath
	I0910 18:42:33.826895   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHUsername
	I0910 18:42:33.827074   53190 main.go:141] libmachine: Using SSH client type: native
	I0910 18:42:33.827309   53190 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0910 18:42:33.827335   53190 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:42:39.401985   53190 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:42:39.402015   53190 machine.go:96] duration metric: took 6.449168018s to provisionDockerMachine
	I0910 18:42:39.402026   53190 start.go:293] postStartSetup for "pause-459729" (driver="kvm2")
	I0910 18:42:39.402036   53190 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:42:39.402051   53190 main.go:141] libmachine: (pause-459729) Calling .DriverName
	I0910 18:42:39.402501   53190 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:42:39.402527   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHHostname
	I0910 18:42:39.405118   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:39.405534   53190 main.go:141] libmachine: (pause-459729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:02", ip: ""} in network mk-pause-459729: {Iface:virbr2 ExpiryTime:2024-09-10 19:41:25 +0000 UTC Type:0 Mac:52:54:00:25:60:02 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:pause-459729 Clientid:01:52:54:00:25:60:02}
	I0910 18:42:39.405564   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined IP address 192.168.50.99 and MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:39.405716   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHPort
	I0910 18:42:39.405893   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHKeyPath
	I0910 18:42:39.406032   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHUsername
	I0910 18:42:39.406164   53190 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/pause-459729/id_rsa Username:docker}
	I0910 18:42:39.491856   53190 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:42:39.496302   53190 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:42:39.496322   53190 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:42:39.496384   53190 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:42:39.496468   53190 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:42:39.496585   53190 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:42:39.506036   53190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:42:39.532872   53190 start.go:296] duration metric: took 130.831667ms for postStartSetup
	I0910 18:42:39.532914   53190 fix.go:56] duration metric: took 6.608360155s for fixHost
	I0910 18:42:39.532938   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHHostname
	I0910 18:42:39.535868   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:39.536189   53190 main.go:141] libmachine: (pause-459729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:02", ip: ""} in network mk-pause-459729: {Iface:virbr2 ExpiryTime:2024-09-10 19:41:25 +0000 UTC Type:0 Mac:52:54:00:25:60:02 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:pause-459729 Clientid:01:52:54:00:25:60:02}
	I0910 18:42:39.536245   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined IP address 192.168.50.99 and MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:39.536424   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHPort
	I0910 18:42:39.536618   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHKeyPath
	I0910 18:42:39.536806   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHKeyPath
	I0910 18:42:39.536957   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHUsername
	I0910 18:42:39.537159   53190 main.go:141] libmachine: Using SSH client type: native
	I0910 18:42:39.537369   53190 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0910 18:42:39.537385   53190 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:42:39.645682   53190 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725993759.632951618
	
	I0910 18:42:39.645709   53190 fix.go:216] guest clock: 1725993759.632951618
	I0910 18:42:39.645719   53190 fix.go:229] Guest: 2024-09-10 18:42:39.632951618 +0000 UTC Remote: 2024-09-10 18:42:39.532919075 +0000 UTC m=+6.801442573 (delta=100.032543ms)
	I0910 18:42:39.645743   53190 fix.go:200] guest clock delta is within tolerance: 100.032543ms
	I0910 18:42:39.645749   53190 start.go:83] releasing machines lock for "pause-459729", held for 6.721209693s
	I0910 18:42:39.645766   53190 main.go:141] libmachine: (pause-459729) Calling .DriverName
	I0910 18:42:39.646017   53190 main.go:141] libmachine: (pause-459729) Calling .GetIP
	I0910 18:42:39.648814   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:39.649221   53190 main.go:141] libmachine: (pause-459729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:02", ip: ""} in network mk-pause-459729: {Iface:virbr2 ExpiryTime:2024-09-10 19:41:25 +0000 UTC Type:0 Mac:52:54:00:25:60:02 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:pause-459729 Clientid:01:52:54:00:25:60:02}
	I0910 18:42:39.649285   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined IP address 192.168.50.99 and MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:39.649444   53190 main.go:141] libmachine: (pause-459729) Calling .DriverName
	I0910 18:42:39.649926   53190 main.go:141] libmachine: (pause-459729) Calling .DriverName
	I0910 18:42:39.650113   53190 main.go:141] libmachine: (pause-459729) Calling .DriverName
	I0910 18:42:39.650201   53190 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:42:39.650277   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHHostname
	I0910 18:42:39.650312   53190 ssh_runner.go:195] Run: cat /version.json
	I0910 18:42:39.650334   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHHostname
	I0910 18:42:39.653055   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:39.653384   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:39.653415   53190 main.go:141] libmachine: (pause-459729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:02", ip: ""} in network mk-pause-459729: {Iface:virbr2 ExpiryTime:2024-09-10 19:41:25 +0000 UTC Type:0 Mac:52:54:00:25:60:02 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:pause-459729 Clientid:01:52:54:00:25:60:02}
	I0910 18:42:39.653436   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined IP address 192.168.50.99 and MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:39.653822   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHPort
	I0910 18:42:39.653903   53190 main.go:141] libmachine: (pause-459729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:02", ip: ""} in network mk-pause-459729: {Iface:virbr2 ExpiryTime:2024-09-10 19:41:25 +0000 UTC Type:0 Mac:52:54:00:25:60:02 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:pause-459729 Clientid:01:52:54:00:25:60:02}
	I0910 18:42:39.653932   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined IP address 192.168.50.99 and MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:39.653972   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHKeyPath
	I0910 18:42:39.654103   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHPort
	I0910 18:42:39.654167   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHUsername
	I0910 18:42:39.654235   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHKeyPath
	I0910 18:42:39.654296   53190 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/pause-459729/id_rsa Username:docker}
	I0910 18:42:39.654356   53190 main.go:141] libmachine: (pause-459729) Calling .GetSSHUsername
	I0910 18:42:39.654476   53190 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/pause-459729/id_rsa Username:docker}
	I0910 18:42:39.734500   53190 ssh_runner.go:195] Run: systemctl --version
	I0910 18:42:39.756245   53190 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:42:39.913755   53190 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:42:39.920610   53190 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:42:39.920677   53190 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:42:39.930055   53190 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0910 18:42:39.930074   53190 start.go:495] detecting cgroup driver to use...
	I0910 18:42:39.930120   53190 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:42:39.949811   53190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:42:39.963634   53190 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:42:39.963695   53190 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:42:39.977902   53190 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:42:39.991664   53190 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:42:40.142414   53190 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:42:40.291493   53190 docker.go:233] disabling docker service ...
	I0910 18:42:40.291576   53190 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:42:40.310922   53190 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:42:40.329765   53190 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:42:40.479312   53190 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:42:40.630892   53190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:42:40.647591   53190 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:42:40.670292   53190 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:42:40.670364   53190 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:42:40.683361   53190 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:42:40.683435   53190 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:42:40.696388   53190 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:42:40.709021   53190 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:42:40.720104   53190 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:42:40.731221   53190 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:42:40.742985   53190 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:42:40.757641   53190 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:42:40.768602   53190 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:42:40.779759   53190 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:42:40.790473   53190 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:42:40.926960   53190 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:42:41.128250   53190 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:42:41.128360   53190 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:42:41.133433   53190 start.go:563] Will wait 60s for crictl version
	I0910 18:42:41.133490   53190 ssh_runner.go:195] Run: which crictl
	I0910 18:42:41.137957   53190 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:42:41.178569   53190 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:42:41.178654   53190 ssh_runner.go:195] Run: crio --version
	I0910 18:42:41.210915   53190 ssh_runner.go:195] Run: crio --version
	I0910 18:42:41.245312   53190 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 18:42:41.246430   53190 main.go:141] libmachine: (pause-459729) Calling .GetIP
	I0910 18:42:41.249156   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:41.249540   53190 main.go:141] libmachine: (pause-459729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:02", ip: ""} in network mk-pause-459729: {Iface:virbr2 ExpiryTime:2024-09-10 19:41:25 +0000 UTC Type:0 Mac:52:54:00:25:60:02 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:pause-459729 Clientid:01:52:54:00:25:60:02}
	I0910 18:42:41.249568   53190 main.go:141] libmachine: (pause-459729) DBG | domain pause-459729 has defined IP address 192.168.50.99 and MAC address 52:54:00:25:60:02 in network mk-pause-459729
	I0910 18:42:41.249751   53190 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0910 18:42:41.254475   53190 kubeadm.go:883] updating cluster {Name:pause-459729 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0
ClusterName:pause-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:42:41.254587   53190 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:42:41.254622   53190 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:42:41.298457   53190 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:42:41.298484   53190 crio.go:433] Images already preloaded, skipping extraction
	I0910 18:42:41.298539   53190 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:42:41.333256   53190 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:42:41.333281   53190 cache_images.go:84] Images are preloaded, skipping loading
	I0910 18:42:41.333288   53190 kubeadm.go:934] updating node { 192.168.50.99 8443 v1.31.0 crio true true} ...
	I0910 18:42:41.333376   53190 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-459729 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:42:41.333435   53190 ssh_runner.go:195] Run: crio config
	I0910 18:42:41.387144   53190 cni.go:84] Creating CNI manager for ""
	I0910 18:42:41.387171   53190 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:42:41.387187   53190 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:42:41.387214   53190 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.99 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-459729 NodeName:pause-459729 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:42:41.387391   53190 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-459729"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:42:41.387461   53190 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:42:41.399561   53190 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:42:41.399624   53190 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:42:41.411194   53190 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0910 18:42:41.430994   53190 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:42:41.453503   53190 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0910 18:42:41.473535   53190 ssh_runner.go:195] Run: grep 192.168.50.99	control-plane.minikube.internal$ /etc/hosts
	I0910 18:42:41.478074   53190 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:42:41.636604   53190 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:42:41.651924   53190 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/pause-459729 for IP: 192.168.50.99
	I0910 18:42:41.651955   53190 certs.go:194] generating shared ca certs ...
	I0910 18:42:41.651970   53190 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:42:41.652130   53190 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:42:41.652180   53190 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:42:41.652195   53190 certs.go:256] generating profile certs ...
	I0910 18:42:41.652291   53190 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/pause-459729/client.key
	I0910 18:42:41.652380   53190 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/pause-459729/apiserver.key.60dfca90
	I0910 18:42:41.652428   53190 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/pause-459729/proxy-client.key
	I0910 18:42:41.652576   53190 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:42:41.652613   53190 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:42:41.652626   53190 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:42:41.652660   53190 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:42:41.652689   53190 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:42:41.652718   53190 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:42:41.652773   53190 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:42:41.653587   53190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:42:41.678529   53190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:42:41.703281   53190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:42:41.727780   53190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:42:41.754817   53190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/pause-459729/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0910 18:42:41.778636   53190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/pause-459729/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 18:42:41.801390   53190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/pause-459729/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:42:41.825423   53190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/pause-459729/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:42:41.852340   53190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:42:41.876675   53190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:42:41.903189   53190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:42:41.928750   53190 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:42:41.946858   53190 ssh_runner.go:195] Run: openssl version
	I0910 18:42:41.953005   53190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:42:41.965029   53190 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:42:41.969852   53190 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:42:41.969902   53190 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:42:41.975975   53190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:42:41.988820   53190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:42:42.000313   53190 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:42:42.004813   53190 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:42:42.004868   53190 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:42:42.010746   53190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:42:42.020996   53190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:42:42.032179   53190 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:42:42.036637   53190 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:42:42.036685   53190 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:42:42.042397   53190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:42:42.086633   53190 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:42:42.096785   53190 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:42:42.135224   53190 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:42:42.159729   53190 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:42:42.191891   53190 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:42:42.229678   53190 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:42:42.244751   53190 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
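The certificate steps above boil down to two openssl invocations: openssl x509 -hash to derive the name of the /etc/ssl/certs/<hash>.0 trust-store symlink, and openssl x509 -checkend 86400 to confirm a certificate stays valid for at least another 24 hours. A minimal Go sketch of those two steps follows; the paths are examples taken from the log, and it runs the commands locally rather than over SSH as minikube does.

	package main
	
	import (
		"fmt"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	func main() {
		ca := "/usr/share/ca-certificates/minikubeCA.pem"
	
		// openssl x509 -hash -noout -in <pem> prints the subject hash used to
		// name the /etc/ssl/certs/<hash>.0 symlink, as in the log above.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", ca).Output()
		if err != nil {
			panic(err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		if err := exec.Command("sudo", "ln", "-fs", ca, link).Run(); err != nil {
			panic(err)
		}
	
		// openssl x509 -checkend 86400 exits non-zero if the certificate expires
		// within the next 86400 seconds (24h), which is the check the log runs
		// against each control-plane cert.
		cert := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
		if err := exec.Command("openssl", "x509", "-noout", "-in", cert, "-checkend", "86400").Run(); err != nil {
			fmt.Println("certificate expires within 24h (or could not be read):", err)
		}
	}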
	I0910 18:42:42.285116   53190 kubeadm.go:392] StartCluster: {Name:pause-459729 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:pause-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:42:42.285271   53190 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:42:42.285340   53190 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:42:42.428831   53190 cri.go:89] found id: "28c554d2f7dee4e29def9c7a65b6a5ff57f819fb17da98d03f21775e5cacde12"
	I0910 18:42:42.428860   53190 cri.go:89] found id: "9f2773d9050f95c3f38aafc5ac2cd12063952f294a075fc1d401576aaaceac03"
	I0910 18:42:42.428866   53190 cri.go:89] found id: "869d0c445736ab1c1e22287a99c09a28eb8d951538c2cc0f9aca496302c2862e"
	I0910 18:42:42.428872   53190 cri.go:89] found id: "2c88a17a89cb923927e6fead3c7e7dd6ce7030e7d5e86dd53a16f870f0eef956"
	I0910 18:42:42.428877   53190 cri.go:89] found id: "c090d45a33d0ab688e75dd982dc4fa3135d411c58f25896f53d9ebaea47e23f1"
	I0910 18:42:42.428882   53190 cri.go:89] found id: "cd2669ab8283f4b8b6c3cc90aa3146ef9ad5b33beabf738b23571bb96f5864f1"
	I0910 18:42:42.428886   53190 cri.go:89] found id: ""
	I0910 18:42:42.428936   53190 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
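The container listing near the end of the trace uses crictl's label filter; with --quiet it prints one container ID per line, which is exactly what the "found id:" lines show. A small Go sketch of that call, using the same flags as the log but run locally instead of through ssh_runner:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// List all kube-system containers (running or not) by ID only.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}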
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-459729 -n pause-459729
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-459729 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-459729 logs -n 25: (1.915417192s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-642043 sudo cat      | cilium-642043             | jenkins | v1.34.0 | 10 Sep 24 18:39 UTC |                     |
	|         | /etc/containerd/config.toml    |                           |         |         |                     |                     |
	| ssh     | -p cilium-642043 sudo          | cilium-642043             | jenkins | v1.34.0 | 10 Sep 24 18:39 UTC |                     |
	|         | containerd config dump         |                           |         |         |                     |                     |
	| ssh     | -p cilium-642043 sudo          | cilium-642043             | jenkins | v1.34.0 | 10 Sep 24 18:39 UTC |                     |
	|         | systemctl status crio --all    |                           |         |         |                     |                     |
	|         | --full --no-pager              |                           |         |         |                     |                     |
	| ssh     | -p cilium-642043 sudo          | cilium-642043             | jenkins | v1.34.0 | 10 Sep 24 18:39 UTC |                     |
	|         | systemctl cat crio --no-pager  |                           |         |         |                     |                     |
	| ssh     | -p cilium-642043 sudo find     | cilium-642043             | jenkins | v1.34.0 | 10 Sep 24 18:39 UTC |                     |
	|         | /etc/crio -type f -exec sh -c  |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;           |                           |         |         |                     |                     |
	| ssh     | -p cilium-642043 sudo crio     | cilium-642043             | jenkins | v1.34.0 | 10 Sep 24 18:39 UTC |                     |
	|         | config                         |                           |         |         |                     |                     |
	| delete  | -p cilium-642043               | cilium-642043             | jenkins | v1.34.0 | 10 Sep 24 18:39 UTC | 10 Sep 24 18:39 UTC |
	| start   | -p running-upgrade-926585      | minikube                  | jenkins | v1.26.0 | 10 Sep 24 18:39 UTC | 10 Sep 24 18:41 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-358325 stop    | minikube                  | jenkins | v1.26.0 | 10 Sep 24 18:40 UTC | 10 Sep 24 18:40 UTC |
	| start   | -p stopped-upgrade-358325      | stopped-upgrade-358325    | jenkins | v1.34.0 | 10 Sep 24 18:40 UTC | 10 Sep 24 18:41 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-174877         | offline-crio-174877       | jenkins | v1.34.0 | 10 Sep 24 18:40 UTC | 10 Sep 24 18:40 UTC |
	| start   | -p pause-459729 --memory=2048  | pause-459729              | jenkins | v1.34.0 | 10 Sep 24 18:40 UTC | 10 Sep 24 18:42 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-926585      | running-upgrade-926585    | jenkins | v1.34.0 | 10 Sep 24 18:41 UTC | 10 Sep 24 18:43 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-358325      | stopped-upgrade-358325    | jenkins | v1.34.0 | 10 Sep 24 18:41 UTC | 10 Sep 24 18:41 UTC |
	| start   | -p NoKubernetes-229565         | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:41 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-229565         | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:41 UTC | 10 Sep 24 18:42 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-459729                | pause-459729              | jenkins | v1.34.0 | 10 Sep 24 18:42 UTC | 10 Sep 24 18:43 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-229565         | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:42 UTC | 10 Sep 24 18:42 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-229565         | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:42 UTC | 10 Sep 24 18:42 UTC |
	| start   | -p NoKubernetes-229565         | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:42 UTC | 10 Sep 24 18:43 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-926585      | running-upgrade-926585    | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	| start   | -p force-systemd-flag-652506   | force-systemd-flag-652506 | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC |                     |
	|         | --memory=2048 --force-systemd  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-229565 sudo    | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-229565         | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	| start   | -p NoKubernetes-229565         | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:43:23
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:43:23.102113   54227 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:43:23.102207   54227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:43:23.102211   54227 out.go:358] Setting ErrFile to fd 2...
	I0910 18:43:23.102214   54227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:43:23.102378   54227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:43:23.102852   54227 out.go:352] Setting JSON to false
	I0910 18:43:23.103860   54227 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5155,"bootTime":1725988648,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 18:43:23.103910   54227 start.go:139] virtualization: kvm guest
	I0910 18:43:23.105771   54227 out.go:177] * [NoKubernetes-229565] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 18:43:23.106991   54227 notify.go:220] Checking for updates...
	I0910 18:43:23.106999   54227 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:43:23.108537   54227 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:43:23.109983   54227 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:43:23.111002   54227 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:43:23.112061   54227 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 18:43:23.113104   54227 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:43:23.114612   54227 config.go:182] Loaded profile config "NoKubernetes-229565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0910 18:43:23.115183   54227 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:43:23.115259   54227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:43:23.134452   54227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46337
	I0910 18:43:23.134890   54227 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:43:23.135488   54227 main.go:141] libmachine: Using API Version  1
	I0910 18:43:23.135502   54227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:43:23.135794   54227 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:43:23.135940   54227 main.go:141] libmachine: (NoKubernetes-229565) Calling .DriverName
	I0910 18:43:23.136161   54227 start.go:1780] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0910 18:43:23.136173   54227 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:43:23.136445   54227 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:43:23.136477   54227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:43:23.150786   54227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38691
	I0910 18:43:23.151110   54227 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:43:23.151655   54227 main.go:141] libmachine: Using API Version  1
	I0910 18:43:23.151675   54227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:43:23.151957   54227 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:43:23.152124   54227 main.go:141] libmachine: (NoKubernetes-229565) Calling .DriverName
	I0910 18:43:23.188623   54227 out.go:177] * Using the kvm2 driver based on existing profile
	I0910 18:43:23.189672   54227 start.go:297] selected driver: kvm2
	I0910 18:43:23.189680   54227 start.go:901] validating driver "kvm2" against &{Name:NoKubernetes-229565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v0.0.0 ClusterName:NoKubernetes-229565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.38 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:43:23.189823   54227 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:43:23.190201   54227 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:43:23.190261   54227 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 18:43:23.205174   54227 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 18:43:23.206176   54227 cni.go:84] Creating CNI manager for ""
	I0910 18:43:23.206189   54227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:43:23.206268   54227 start.go:340] cluster config:
	{Name:NoKubernetes-229565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-229565 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.38 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:43:23.206426   54227 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:43:23.208244   54227 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-229565
	I0910 18:43:19.213515   53892 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0910 18:43:19.213691   53892 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:43:19.213733   53892 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:43:19.228382   53892 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0910 18:43:19.228872   53892 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:43:19.229419   53892 main.go:141] libmachine: Using API Version  1
	I0910 18:43:19.229439   53892 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:43:19.229745   53892 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:43:19.229934   53892 main.go:141] libmachine: (force-systemd-flag-652506) Calling .GetMachineName
	I0910 18:43:19.230064   53892 main.go:141] libmachine: (force-systemd-flag-652506) Calling .DriverName
	I0910 18:43:19.230223   53892 start.go:159] libmachine.API.Create for "force-systemd-flag-652506" (driver="kvm2")
	I0910 18:43:19.230252   53892 client.go:168] LocalClient.Create starting
	I0910 18:43:19.230285   53892 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem
	I0910 18:43:19.230330   53892 main.go:141] libmachine: Decoding PEM data...
	I0910 18:43:19.230349   53892 main.go:141] libmachine: Parsing certificate...
	I0910 18:43:19.230417   53892 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem
	I0910 18:43:19.230443   53892 main.go:141] libmachine: Decoding PEM data...
	I0910 18:43:19.230462   53892 main.go:141] libmachine: Parsing certificate...
	I0910 18:43:19.230486   53892 main.go:141] libmachine: Running pre-create checks...
	I0910 18:43:19.230504   53892 main.go:141] libmachine: (force-systemd-flag-652506) Calling .PreCreateCheck
	I0910 18:43:19.230828   53892 main.go:141] libmachine: (force-systemd-flag-652506) Calling .GetConfigRaw
	I0910 18:43:19.231188   53892 main.go:141] libmachine: Creating machine...
	I0910 18:43:19.231201   53892 main.go:141] libmachine: (force-systemd-flag-652506) Calling .Create
	I0910 18:43:19.231322   53892 main.go:141] libmachine: (force-systemd-flag-652506) Creating KVM machine...
	I0910 18:43:19.232500   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | found existing default KVM network
	I0910 18:43:19.233630   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:19.233471   53915 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e4:ea:60} reservation:<nil>}
	I0910 18:43:19.234450   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:19.234385   53915 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:42:2b:2c} reservation:<nil>}
	I0910 18:43:19.235493   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:19.235390   53915 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000308800}
	I0910 18:43:19.235518   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | created network xml: 
	I0910 18:43:19.235529   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | <network>
	I0910 18:43:19.235546   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG |   <name>mk-force-systemd-flag-652506</name>
	I0910 18:43:19.235563   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG |   <dns enable='no'/>
	I0910 18:43:19.235573   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG |   
	I0910 18:43:19.235584   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0910 18:43:19.235595   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG |     <dhcp>
	I0910 18:43:19.235610   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0910 18:43:19.235617   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG |     </dhcp>
	I0910 18:43:19.235623   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG |   </ip>
	I0910 18:43:19.235635   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG |   
	I0910 18:43:19.235652   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | </network>
	I0910 18:43:19.235660   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | 
	I0910 18:43:19.240653   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | trying to create private KVM network mk-force-systemd-flag-652506 192.168.61.0/24...
	I0910 18:43:19.313960   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | private KVM network mk-force-systemd-flag-652506 192.168.61.0/24 created
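For reference, a network definition like the XML printed above can also be created by hand: the sketch below writes the same XML to a temp file, then defines and starts it with virsh. This is an illustration only; the kvm2 driver talks to libvirt through its Go bindings rather than the CLI.

	package main
	
	import (
		"os"
		"os/exec"
	)
	
	func main() {
		xml := `<network>
	  <name>mk-force-systemd-flag-652506</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>`
	
		// virsh net-define expects a file, so persist the XML first.
		f, err := os.CreateTemp("", "mk-net-*.xml")
		if err != nil {
			panic(err)
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(xml); err != nil {
			panic(err)
		}
		f.Close()
	
		for _, args := range [][]string{
			{"net-define", f.Name()},
			{"net-start", "mk-force-systemd-flag-652506"},
		} {
			if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
				panic(string(out))
			}
		}
	}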
	I0910 18:43:19.314034   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:19.313951   53915 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:43:19.314058   53892 main.go:141] libmachine: (force-systemd-flag-652506) Setting up store path in /home/jenkins/minikube-integration/19598-5973/.minikube/machines/force-systemd-flag-652506 ...
	I0910 18:43:19.314076   53892 main.go:141] libmachine: (force-systemd-flag-652506) Building disk image from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 18:43:19.314090   53892 main.go:141] libmachine: (force-systemd-flag-652506) Downloading /home/jenkins/minikube-integration/19598-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 18:43:19.555458   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:19.555328   53915 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/force-systemd-flag-652506/id_rsa...
	I0910 18:43:19.614201   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:19.614102   53915 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/force-systemd-flag-652506/force-systemd-flag-652506.rawdisk...
	I0910 18:43:19.614229   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Writing magic tar header
	I0910 18:43:19.614248   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Writing SSH key tar header
	I0910 18:43:19.614281   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:19.614208   53915 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/force-systemd-flag-652506 ...
	I0910 18:43:19.614312   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/force-systemd-flag-652506
	I0910 18:43:19.614367   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines
	I0910 18:43:19.614385   53892 main.go:141] libmachine: (force-systemd-flag-652506) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/force-systemd-flag-652506 (perms=drwx------)
	I0910 18:43:19.614402   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:43:19.614421   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973
	I0910 18:43:19.614429   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0910 18:43:19.614439   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Checking permissions on dir: /home/jenkins
	I0910 18:43:19.614447   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Checking permissions on dir: /home
	I0910 18:43:19.614459   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Skipping /home - not owner
	I0910 18:43:19.614479   53892 main.go:141] libmachine: (force-systemd-flag-652506) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines (perms=drwxr-xr-x)
	I0910 18:43:19.614494   53892 main.go:141] libmachine: (force-systemd-flag-652506) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube (perms=drwxr-xr-x)
	I0910 18:43:19.614511   53892 main.go:141] libmachine: (force-systemd-flag-652506) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973 (perms=drwxrwxr-x)
	I0910 18:43:19.614526   53892 main.go:141] libmachine: (force-systemd-flag-652506) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0910 18:43:19.614535   53892 main.go:141] libmachine: (force-systemd-flag-652506) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0910 18:43:19.614542   53892 main.go:141] libmachine: (force-systemd-flag-652506) Creating domain...
	I0910 18:43:19.615540   53892 main.go:141] libmachine: (force-systemd-flag-652506) define libvirt domain using xml: 
	I0910 18:43:19.615572   53892 main.go:141] libmachine: (force-systemd-flag-652506) <domain type='kvm'>
	I0910 18:43:19.615584   53892 main.go:141] libmachine: (force-systemd-flag-652506)   <name>force-systemd-flag-652506</name>
	I0910 18:43:19.615596   53892 main.go:141] libmachine: (force-systemd-flag-652506)   <memory unit='MiB'>2048</memory>
	I0910 18:43:19.615609   53892 main.go:141] libmachine: (force-systemd-flag-652506)   <vcpu>2</vcpu>
	I0910 18:43:19.615620   53892 main.go:141] libmachine: (force-systemd-flag-652506)   <features>
	I0910 18:43:19.615630   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <acpi/>
	I0910 18:43:19.615637   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <apic/>
	I0910 18:43:19.615645   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <pae/>
	I0910 18:43:19.615650   53892 main.go:141] libmachine: (force-systemd-flag-652506)     
	I0910 18:43:19.615656   53892 main.go:141] libmachine: (force-systemd-flag-652506)   </features>
	I0910 18:43:19.615660   53892 main.go:141] libmachine: (force-systemd-flag-652506)   <cpu mode='host-passthrough'>
	I0910 18:43:19.615666   53892 main.go:141] libmachine: (force-systemd-flag-652506)   
	I0910 18:43:19.615671   53892 main.go:141] libmachine: (force-systemd-flag-652506)   </cpu>
	I0910 18:43:19.615692   53892 main.go:141] libmachine: (force-systemd-flag-652506)   <os>
	I0910 18:43:19.615714   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <type>hvm</type>
	I0910 18:43:19.615724   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <boot dev='cdrom'/>
	I0910 18:43:19.615735   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <boot dev='hd'/>
	I0910 18:43:19.615748   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <bootmenu enable='no'/>
	I0910 18:43:19.615759   53892 main.go:141] libmachine: (force-systemd-flag-652506)   </os>
	I0910 18:43:19.615782   53892 main.go:141] libmachine: (force-systemd-flag-652506)   <devices>
	I0910 18:43:19.615796   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <disk type='file' device='cdrom'>
	I0910 18:43:19.615817   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/force-systemd-flag-652506/boot2docker.iso'/>
	I0910 18:43:19.615827   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <target dev='hdc' bus='scsi'/>
	I0910 18:43:19.615835   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <readonly/>
	I0910 18:43:19.615845   53892 main.go:141] libmachine: (force-systemd-flag-652506)     </disk>
	I0910 18:43:19.615855   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <disk type='file' device='disk'>
	I0910 18:43:19.615872   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0910 18:43:19.615889   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/force-systemd-flag-652506/force-systemd-flag-652506.rawdisk'/>
	I0910 18:43:19.615901   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <target dev='hda' bus='virtio'/>
	I0910 18:43:19.615913   53892 main.go:141] libmachine: (force-systemd-flag-652506)     </disk>
	I0910 18:43:19.615924   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <interface type='network'>
	I0910 18:43:19.615964   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <source network='mk-force-systemd-flag-652506'/>
	I0910 18:43:19.615988   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <model type='virtio'/>
	I0910 18:43:19.616001   53892 main.go:141] libmachine: (force-systemd-flag-652506)     </interface>
	I0910 18:43:19.616012   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <interface type='network'>
	I0910 18:43:19.616020   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <source network='default'/>
	I0910 18:43:19.616028   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <model type='virtio'/>
	I0910 18:43:19.616034   53892 main.go:141] libmachine: (force-systemd-flag-652506)     </interface>
	I0910 18:43:19.616042   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <serial type='pty'>
	I0910 18:43:19.616048   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <target port='0'/>
	I0910 18:43:19.616057   53892 main.go:141] libmachine: (force-systemd-flag-652506)     </serial>
	I0910 18:43:19.616067   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <console type='pty'>
	I0910 18:43:19.616082   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <target type='serial' port='0'/>
	I0910 18:43:19.616094   53892 main.go:141] libmachine: (force-systemd-flag-652506)     </console>
	I0910 18:43:19.616103   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <rng model='virtio'>
	I0910 18:43:19.616112   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <backend model='random'>/dev/random</backend>
	I0910 18:43:19.616119   53892 main.go:141] libmachine: (force-systemd-flag-652506)     </rng>
	I0910 18:43:19.616124   53892 main.go:141] libmachine: (force-systemd-flag-652506)     
	I0910 18:43:19.616131   53892 main.go:141] libmachine: (force-systemd-flag-652506)     
	I0910 18:43:19.616139   53892 main.go:141] libmachine: (force-systemd-flag-652506)   </devices>
	I0910 18:43:19.616149   53892 main.go:141] libmachine: (force-systemd-flag-652506) </domain>
	I0910 18:43:19.616167   53892 main.go:141] libmachine: (force-systemd-flag-652506) 
	I0910 18:43:19.620152   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | domain force-systemd-flag-652506 has defined MAC address 52:54:00:b1:50:90 in network default
	I0910 18:43:19.620769   53892 main.go:141] libmachine: (force-systemd-flag-652506) Ensuring networks are active...
	I0910 18:43:19.620784   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | domain force-systemd-flag-652506 has defined MAC address 52:54:00:90:1c:89 in network mk-force-systemd-flag-652506
	I0910 18:43:19.621523   53892 main.go:141] libmachine: (force-systemd-flag-652506) Ensuring network default is active
	I0910 18:43:19.621904   53892 main.go:141] libmachine: (force-systemd-flag-652506) Ensuring network mk-force-systemd-flag-652506 is active
	I0910 18:43:19.622483   53892 main.go:141] libmachine: (force-systemd-flag-652506) Getting domain xml...
	I0910 18:43:19.623214   53892 main.go:141] libmachine: (force-systemd-flag-652506) Creating domain...
	I0910 18:43:20.923962   53892 main.go:141] libmachine: (force-systemd-flag-652506) Waiting to get IP...
	I0910 18:43:20.924875   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | domain force-systemd-flag-652506 has defined MAC address 52:54:00:90:1c:89 in network mk-force-systemd-flag-652506
	I0910 18:43:20.925336   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | unable to find current IP address of domain force-systemd-flag-652506 in network mk-force-systemd-flag-652506
	I0910 18:43:20.925399   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:20.925331   53915 retry.go:31] will retry after 205.496468ms: waiting for machine to come up
	I0910 18:43:21.132977   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | domain force-systemd-flag-652506 has defined MAC address 52:54:00:90:1c:89 in network mk-force-systemd-flag-652506
	I0910 18:43:21.133446   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | unable to find current IP address of domain force-systemd-flag-652506 in network mk-force-systemd-flag-652506
	I0910 18:43:21.133474   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:21.133401   53915 retry.go:31] will retry after 387.061178ms: waiting for machine to come up
	I0910 18:43:21.521385   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | domain force-systemd-flag-652506 has defined MAC address 52:54:00:90:1c:89 in network mk-force-systemd-flag-652506
	I0910 18:43:21.521934   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | unable to find current IP address of domain force-systemd-flag-652506 in network mk-force-systemd-flag-652506
	I0910 18:43:21.521961   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:21.521887   53915 retry.go:31] will retry after 416.131049ms: waiting for machine to come up
	I0910 18:43:21.939432   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | domain force-systemd-flag-652506 has defined MAC address 52:54:00:90:1c:89 in network mk-force-systemd-flag-652506
	I0910 18:43:21.939844   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | unable to find current IP address of domain force-systemd-flag-652506 in network mk-force-systemd-flag-652506
	I0910 18:43:21.939872   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:21.939796   53915 retry.go:31] will retry after 538.332525ms: waiting for machine to come up
	I0910 18:43:22.480835   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | domain force-systemd-flag-652506 has defined MAC address 52:54:00:90:1c:89 in network mk-force-systemd-flag-652506
	I0910 18:43:22.481356   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | unable to find current IP address of domain force-systemd-flag-652506 in network mk-force-systemd-flag-652506
	I0910 18:43:22.481385   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:22.481326   53915 retry.go:31] will retry after 639.986264ms: waiting for machine to come up
	I0910 18:43:23.122681   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | domain force-systemd-flag-652506 has defined MAC address 52:54:00:90:1c:89 in network mk-force-systemd-flag-652506
	I0910 18:43:23.123136   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | unable to find current IP address of domain force-systemd-flag-652506 in network mk-force-systemd-flag-652506
	I0910 18:43:23.123163   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:23.123102   53915 retry.go:31] will retry after 830.641931ms: waiting for machine to come up
	I0910 18:43:23.954898   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | domain force-systemd-flag-652506 has defined MAC address 52:54:00:90:1c:89 in network mk-force-systemd-flag-652506
	I0910 18:43:23.955315   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | unable to find current IP address of domain force-systemd-flag-652506 in network mk-force-systemd-flag-652506
	I0910 18:43:23.955336   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:23.955271   53915 retry.go:31] will retry after 1.045376309s: waiting for machine to come up
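The repeated "unable to find current IP address ... will retry after" lines are a simple poll-with-backoff on the domain's DHCP lease. A rough stand-in using virsh domifaddr is sketched below; the driver queries libvirt directly, and the domain name and timeout here are just examples.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// waitForIP polls the domain's DHCP lease until an IPv4 address appears,
	// sleeping a little longer between attempts each time.
	func waitForIP(domain string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			out, _ := exec.Command("virsh", "domifaddr", domain, "--source", "lease").Output()
			for _, line := range strings.Split(string(out), "\n") {
				if strings.Contains(line, "ipv4") {
					fields := strings.Fields(line)
					// The last column looks like 192.168.61.23/24.
					return strings.Split(fields[len(fields)-1], "/")[0], nil
				}
			}
			time.Sleep(delay)
			delay *= 2 // grow the delay, roughly like the retry.go waits in the log
		}
		return "", fmt.Errorf("no IP for domain %s within %s", domain, timeout)
	}
	
	func main() {
		ip, err := waitForIP("force-systemd-flag-652506", 2*time.Minute)
		if err != nil {
			panic(err)
		}
		fmt.Println("machine IP:", ip)
	}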
	I0910 18:43:22.914092   53190 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:43:22.930556   53190 node_ready.go:35] waiting up to 6m0s for node "pause-459729" to be "Ready" ...
	I0910 18:43:22.934090   53190 node_ready.go:49] node "pause-459729" has status "Ready":"True"
	I0910 18:43:22.934118   53190 node_ready.go:38] duration metric: took 3.524785ms for node "pause-459729" to be "Ready" ...
	I0910 18:43:22.934130   53190 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:43:22.939271   53190 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t7nl7" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:23.120589   53190 pod_ready.go:93] pod "coredns-6f6b679f8f-t7nl7" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:23.120612   53190 pod_ready.go:82] duration metric: took 181.316344ms for pod "coredns-6f6b679f8f-t7nl7" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:23.120626   53190 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-459729" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:23.520348   53190 pod_ready.go:93] pod "etcd-pause-459729" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:23.520376   53190 pod_ready.go:82] duration metric: took 399.741261ms for pod "etcd-pause-459729" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:23.520390   53190 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-459729" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:23.920901   53190 pod_ready.go:93] pod "kube-apiserver-pause-459729" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:23.920932   53190 pod_ready.go:82] duration metric: took 400.532528ms for pod "kube-apiserver-pause-459729" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:23.920955   53190 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-459729" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:24.320567   53190 pod_ready.go:93] pod "kube-controller-manager-pause-459729" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:24.320592   53190 pod_ready.go:82] duration metric: took 399.627292ms for pod "kube-controller-manager-pause-459729" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:24.320605   53190 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6f9ft" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:24.721454   53190 pod_ready.go:93] pod "kube-proxy-6f9ft" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:24.721480   53190 pod_ready.go:82] duration metric: took 400.866721ms for pod "kube-proxy-6f9ft" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:24.721495   53190 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-459729" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:25.121598   53190 pod_ready.go:93] pod "kube-scheduler-pause-459729" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:25.121628   53190 pod_ready.go:82] duration metric: took 400.123565ms for pod "kube-scheduler-pause-459729" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:25.121640   53190 pod_ready.go:39] duration metric: took 2.187495281s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
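The pod_ready checks above amount to reading each pod's Ready condition through the API server. A small client-go sketch of the same check, assuming the kubeconfig path from this job and the pod names shown in the log (an illustration, not minikube's own helper):

	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19598-5973/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for _, name := range []string{"etcd-pause-459729", "kube-apiserver-pause-459729", "kube-scheduler-pause-459729"} {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), name, metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			fmt.Printf("%s Ready=%v\n", name, podReady(pod))
		}
	}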
	I0910 18:43:25.121659   53190 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:43:25.121723   53190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:43:25.136420   53190 api_server.go:72] duration metric: took 2.384799877s to wait for apiserver process to appear ...
	I0910 18:43:25.136447   53190 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:43:25.136466   53190 api_server.go:253] Checking apiserver healthz at https://192.168.50.99:8443/healthz ...
	I0910 18:43:25.140623   53190 api_server.go:279] https://192.168.50.99:8443/healthz returned 200:
	ok
	I0910 18:43:25.141537   53190 api_server.go:141] control plane version: v1.31.0
	I0910 18:43:25.141559   53190 api_server.go:131] duration metric: took 5.10443ms to wait for apiserver health ...
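The healthz probe is a plain HTTPS GET that is expected to return 200 with the body "ok". A self-contained sketch of that request follows; TLS verification is skipped here only to keep the example short, whereas a real check would trust the cluster CA.

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)
	
	func main() {
		// Skip certificate verification for brevity; do not do this in real checks.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.50.99:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
	}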
	I0910 18:43:25.141566   53190 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:43:25.322817   53190 system_pods.go:59] 6 kube-system pods found
	I0910 18:43:25.322843   53190 system_pods.go:61] "coredns-6f6b679f8f-t7nl7" [45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6] Running
	I0910 18:43:25.322848   53190 system_pods.go:61] "etcd-pause-459729" [575ecaed-a208-4979-9016-0ec5307f8281] Running
	I0910 18:43:25.322853   53190 system_pods.go:61] "kube-apiserver-pause-459729" [5d48506b-3318-4092-8a9a-84310b428505] Running
	I0910 18:43:25.322857   53190 system_pods.go:61] "kube-controller-manager-pause-459729" [4ba3f5a4-6822-4d82-9e25-40d8ef7d7ae1] Running
	I0910 18:43:25.322861   53190 system_pods.go:61] "kube-proxy-6f9ft" [d3e991db-6bfb-4ffe-bc82-d1533f41844b] Running
	I0910 18:43:25.322865   53190 system_pods.go:61] "kube-scheduler-pause-459729" [c07ab313-5e5a-4d03-82d6-85ce883af3e2] Running
	I0910 18:43:25.322872   53190 system_pods.go:74] duration metric: took 181.299266ms to wait for pod list to return data ...
	I0910 18:43:25.322878   53190 default_sa.go:34] waiting for default service account to be created ...
	I0910 18:43:25.520731   53190 default_sa.go:45] found service account: "default"
	I0910 18:43:25.520759   53190 default_sa.go:55] duration metric: took 197.875693ms for default service account to be created ...
	I0910 18:43:25.520769   53190 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 18:43:25.723361   53190 system_pods.go:86] 6 kube-system pods found
	I0910 18:43:25.723389   53190 system_pods.go:89] "coredns-6f6b679f8f-t7nl7" [45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6] Running
	I0910 18:43:25.723394   53190 system_pods.go:89] "etcd-pause-459729" [575ecaed-a208-4979-9016-0ec5307f8281] Running
	I0910 18:43:25.723398   53190 system_pods.go:89] "kube-apiserver-pause-459729" [5d48506b-3318-4092-8a9a-84310b428505] Running
	I0910 18:43:25.723402   53190 system_pods.go:89] "kube-controller-manager-pause-459729" [4ba3f5a4-6822-4d82-9e25-40d8ef7d7ae1] Running
	I0910 18:43:25.723405   53190 system_pods.go:89] "kube-proxy-6f9ft" [d3e991db-6bfb-4ffe-bc82-d1533f41844b] Running
	I0910 18:43:25.723408   53190 system_pods.go:89] "kube-scheduler-pause-459729" [c07ab313-5e5a-4d03-82d6-85ce883af3e2] Running
	I0910 18:43:25.723414   53190 system_pods.go:126] duration metric: took 202.640786ms to wait for k8s-apps to be running ...
	I0910 18:43:25.723421   53190 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 18:43:25.723461   53190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:43:25.738880   53190 system_svc.go:56] duration metric: took 15.451999ms WaitForService to wait for kubelet
	I0910 18:43:25.738906   53190 kubeadm.go:582] duration metric: took 2.987292613s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:43:25.738922   53190 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:43:25.920381   53190 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:43:25.920405   53190 node_conditions.go:123] node cpu capacity is 2
	I0910 18:43:25.920416   53190 node_conditions.go:105] duration metric: took 181.489217ms to run NodePressure ...
	I0910 18:43:25.920427   53190 start.go:241] waiting for startup goroutines ...
	I0910 18:43:25.920434   53190 start.go:246] waiting for cluster config update ...
	I0910 18:43:25.920440   53190 start.go:255] writing updated cluster config ...
	I0910 18:43:25.920718   53190 ssh_runner.go:195] Run: rm -f paused
	I0910 18:43:25.966709   53190 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 18:43:25.968693   53190 out.go:177] * Done! kubectl is now configured to use "pause-459729" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.605247246Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725993806605222062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=267bc274-cc41-48af-bfc7-fd92f8f33b06 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.606376236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2541278-001e-4708-bd27-2d0bb9cb7ed7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.606495398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2541278-001e-4708-bd27-2d0bb9cb7ed7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.606861751Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b866e4cc28dd3cf4f944df54572ae08b824ef75d314923b0db1c4f5af3425b6,PodSandboxId:4b1b11b89be19963c958f83b5d440c2a33ac5436a4384c96acff350ea37784de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725993785824068433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714edffd445c1714a068f98869ebc1c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ec9808cc4014d60aa3e6d0131ef94422e6d5971e058f56f7d6f9529ed776e3,PodSandboxId:e6fadd9a4845f20f060763c96f73e397290b5ec090c0495c5e654209330cc289,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725993785844450133,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7619d7b95fd844b416116d997fb3ebe3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732d62087adce4a67e4113a8aea919ec8ab4acc910f2ed1ddc42430c05c8d541,PodSandboxId:def0ff0bbe3e740d038c9be2127898ba44a9a07c22e31f47b0f75a7b7e110910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725993785811071051,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871a9a42d1484218d404311dd3859ae5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9ab82089fc93814bf4c345b95ac1779cd3a315bc9e9bcce2123bd5b92c6fd9,PodSandboxId:474db1987e972b55d3045939c8b1890b3385d68f9c1f9a02fa810fed54246b03,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725993773655203101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbef841f86b1ab53aef52f1309c1f595,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400ae9ffe07da788b92b399c1c7de34a9e0c78250ee033231c3584c775006d6c,PodSandboxId:05f0f1bc6c34e0b367f42f35ec596807fc49984d0f3347ec57c2acf000e432f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725993763296068506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t7nl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a44ca5d97d3aea67a6faa8d29de9af945346902b504f41541efc91ce9c20cc02,PodSandboxId:bcb0074ddaf363c84305cd17d03d6bb776f120d08a48bc59959e82d49aa5aff1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725993762669165249,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6f9ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e991db-6bfb-4ffe-bc82-d1533f41844b,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616b2c77b30d4228cd00b109a92ad85ff572cc4e317aad2d290f7c3093bef10a,PodSandboxId:e6fadd9a4845f20f060763c96f73e397290b5ec090c0495c5e654209330cc289,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725993762511439925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7619d7b95fd844b416116d997fb3ebe3,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887b204becab401e5b3be83d69dacee629b065ccd9ab9732c3fcf630fab0e154,PodSandboxId:474db1987e972b55d3045939c8b1890b3385d68f9c1f9a02fa810fed54246b03,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725993762473505516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbef841f86b1ab53aef52f1309c1f595,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7296148ec9e14e597ba27cb7da7aaf117de63f5939640facb7f28fb7bc28223,PodSandboxId:4b1b11b89be19963c958f83b5d440c2a33ac5436a4384c96acff350ea37784de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725993762432097415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714edffd445c1714a068f98869ebc1c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8b4bdfe79b76c4e19d5fb554dbaa75db3ea828999d2c94e0485f001f018b6fe,PodSandboxId:def0ff0bbe3e740d038c9be2127898ba44a9a07c22e31f47b0f75a7b7e110910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725993762399919336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871a9a42d1484218d404311dd3859ae5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c554d2f7dee4e29def9c7a65b6a5ff57f819fb17da98d03f21775e5cacde12,PodSandboxId:b8e1ea111f3c6c3342d44022a0e4bb47716197ca7abdd3ec87898c79c0d65e35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725993715849204907,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6f9ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e991db-6bfb-4ffe-bc82-d1533f41844b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f2773d9050f95c3f38aafc5ac2cd12063952f294a075fc1d401576aaaceac03,PodSandboxId:42082ba41e76d8021fb2a7fc27b8c0614c18fccad25c7363a522647e80db058d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725993715778027536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t7nl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2541278-001e-4708-bd27-2d0bb9cb7ed7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.648405474Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cbede44c-db9e-4cf2-ad71-edd68f8f35fc name=/runtime.v1.RuntimeService/Version
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.648479403Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cbede44c-db9e-4cf2-ad71-edd68f8f35fc name=/runtime.v1.RuntimeService/Version
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.649479930Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0fe5b0f-b810-4195-b10f-f3044cef4f0c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.650152955Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725993806650128925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0fe5b0f-b810-4195-b10f-f3044cef4f0c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.650717149Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10067fe0-10c7-4fa8-9a49-d4b2673e5f94 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.650845500Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10067fe0-10c7-4fa8-9a49-d4b2673e5f94 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.651082153Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b866e4cc28dd3cf4f944df54572ae08b824ef75d314923b0db1c4f5af3425b6,PodSandboxId:4b1b11b89be19963c958f83b5d440c2a33ac5436a4384c96acff350ea37784de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725993785824068433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714edffd445c1714a068f98869ebc1c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ec9808cc4014d60aa3e6d0131ef94422e6d5971e058f56f7d6f9529ed776e3,PodSandboxId:e6fadd9a4845f20f060763c96f73e397290b5ec090c0495c5e654209330cc289,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725993785844450133,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7619d7b95fd844b416116d997fb3ebe3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732d62087adce4a67e4113a8aea919ec8ab4acc910f2ed1ddc42430c05c8d541,PodSandboxId:def0ff0bbe3e740d038c9be2127898ba44a9a07c22e31f47b0f75a7b7e110910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725993785811071051,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871a9a42d1484218d404311dd3859ae5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9ab82089fc93814bf4c345b95ac1779cd3a315bc9e9bcce2123bd5b92c6fd9,PodSandboxId:474db1987e972b55d3045939c8b1890b3385d68f9c1f9a02fa810fed54246b03,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725993773655203101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbef841f86b1ab53aef52f1309c1f595,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400ae9ffe07da788b92b399c1c7de34a9e0c78250ee033231c3584c775006d6c,PodSandboxId:05f0f1bc6c34e0b367f42f35ec596807fc49984d0f3347ec57c2acf000e432f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725993763296068506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t7nl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a44ca5d97d3aea67a6faa8d29de9af945346902b504f41541efc91ce9c20cc02,PodSandboxId:bcb0074ddaf363c84305cd17d03d6bb776f120d08a48bc59959e82d49aa5aff1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725993762669165249,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6f9ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e991db-6bfb-4ffe-bc82-d1533f41844b,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616b2c77b30d4228cd00b109a92ad85ff572cc4e317aad2d290f7c3093bef10a,PodSandboxId:e6fadd9a4845f20f060763c96f73e397290b5ec090c0495c5e654209330cc289,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725993762511439925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7619d7b95fd844b416116d997fb3ebe3,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887b204becab401e5b3be83d69dacee629b065ccd9ab9732c3fcf630fab0e154,PodSandboxId:474db1987e972b55d3045939c8b1890b3385d68f9c1f9a02fa810fed54246b03,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725993762473505516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbef841f86b1ab53aef52f1309c1f595,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7296148ec9e14e597ba27cb7da7aaf117de63f5939640facb7f28fb7bc28223,PodSandboxId:4b1b11b89be19963c958f83b5d440c2a33ac5436a4384c96acff350ea37784de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725993762432097415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714edffd445c1714a068f98869ebc1c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8b4bdfe79b76c4e19d5fb554dbaa75db3ea828999d2c94e0485f001f018b6fe,PodSandboxId:def0ff0bbe3e740d038c9be2127898ba44a9a07c22e31f47b0f75a7b7e110910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725993762399919336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871a9a42d1484218d404311dd3859ae5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c554d2f7dee4e29def9c7a65b6a5ff57f819fb17da98d03f21775e5cacde12,PodSandboxId:b8e1ea111f3c6c3342d44022a0e4bb47716197ca7abdd3ec87898c79c0d65e35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725993715849204907,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6f9ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e991db-6bfb-4ffe-bc82-d1533f41844b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f2773d9050f95c3f38aafc5ac2cd12063952f294a075fc1d401576aaaceac03,PodSandboxId:42082ba41e76d8021fb2a7fc27b8c0614c18fccad25c7363a522647e80db058d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725993715778027536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t7nl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10067fe0-10c7-4fa8-9a49-d4b2673e5f94 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.691992368Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=368e4ad5-b01a-420b-8f88-f716cfa1d3da name=/runtime.v1.RuntimeService/Version
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.692064312Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=368e4ad5-b01a-420b-8f88-f716cfa1d3da name=/runtime.v1.RuntimeService/Version
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.693086507Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=107a0510-aecf-4c98-b3ee-11fe0f0c6571 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.693739777Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725993806693714513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=107a0510-aecf-4c98-b3ee-11fe0f0c6571 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.694409307Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b287c5b8-5dfd-4e98-b5c3-8bbc17c507ad name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.694462750Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b287c5b8-5dfd-4e98-b5c3-8bbc17c507ad name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.694770471Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b866e4cc28dd3cf4f944df54572ae08b824ef75d314923b0db1c4f5af3425b6,PodSandboxId:4b1b11b89be19963c958f83b5d440c2a33ac5436a4384c96acff350ea37784de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725993785824068433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714edffd445c1714a068f98869ebc1c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ec9808cc4014d60aa3e6d0131ef94422e6d5971e058f56f7d6f9529ed776e3,PodSandboxId:e6fadd9a4845f20f060763c96f73e397290b5ec090c0495c5e654209330cc289,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725993785844450133,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7619d7b95fd844b416116d997fb3ebe3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732d62087adce4a67e4113a8aea919ec8ab4acc910f2ed1ddc42430c05c8d541,PodSandboxId:def0ff0bbe3e740d038c9be2127898ba44a9a07c22e31f47b0f75a7b7e110910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725993785811071051,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871a9a42d1484218d404311dd3859ae5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9ab82089fc93814bf4c345b95ac1779cd3a315bc9e9bcce2123bd5b92c6fd9,PodSandboxId:474db1987e972b55d3045939c8b1890b3385d68f9c1f9a02fa810fed54246b03,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725993773655203101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbef841f86b1ab53aef52f1309c1f595,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400ae9ffe07da788b92b399c1c7de34a9e0c78250ee033231c3584c775006d6c,PodSandboxId:05f0f1bc6c34e0b367f42f35ec596807fc49984d0f3347ec57c2acf000e432f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725993763296068506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t7nl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a44ca5d97d3aea67a6faa8d29de9af945346902b504f41541efc91ce9c20cc02,PodSandboxId:bcb0074ddaf363c84305cd17d03d6bb776f120d08a48bc59959e82d49aa5aff1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725993762669165249,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6f9ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e991db-6bfb-4ffe-bc82-d1533f41844b,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616b2c77b30d4228cd00b109a92ad85ff572cc4e317aad2d290f7c3093bef10a,PodSandboxId:e6fadd9a4845f20f060763c96f73e397290b5ec090c0495c5e654209330cc289,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725993762511439925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7619d7b95fd844b416116d997fb3ebe3,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887b204becab401e5b3be83d69dacee629b065ccd9ab9732c3fcf630fab0e154,PodSandboxId:474db1987e972b55d3045939c8b1890b3385d68f9c1f9a02fa810fed54246b03,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725993762473505516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbef841f86b1ab53aef52f1309c1f595,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7296148ec9e14e597ba27cb7da7aaf117de63f5939640facb7f28fb7bc28223,PodSandboxId:4b1b11b89be19963c958f83b5d440c2a33ac5436a4384c96acff350ea37784de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725993762432097415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714edffd445c1714a068f98869ebc1c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8b4bdfe79b76c4e19d5fb554dbaa75db3ea828999d2c94e0485f001f018b6fe,PodSandboxId:def0ff0bbe3e740d038c9be2127898ba44a9a07c22e31f47b0f75a7b7e110910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725993762399919336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871a9a42d1484218d404311dd3859ae5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c554d2f7dee4e29def9c7a65b6a5ff57f819fb17da98d03f21775e5cacde12,PodSandboxId:b8e1ea111f3c6c3342d44022a0e4bb47716197ca7abdd3ec87898c79c0d65e35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725993715849204907,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6f9ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e991db-6bfb-4ffe-bc82-d1533f41844b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f2773d9050f95c3f38aafc5ac2cd12063952f294a075fc1d401576aaaceac03,PodSandboxId:42082ba41e76d8021fb2a7fc27b8c0614c18fccad25c7363a522647e80db058d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725993715778027536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t7nl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b287c5b8-5dfd-4e98-b5c3-8bbc17c507ad name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.735014783Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a40beda-9a05-4811-b0ea-0880e072729a name=/runtime.v1.RuntimeService/Version
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.735093874Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a40beda-9a05-4811-b0ea-0880e072729a name=/runtime.v1.RuntimeService/Version
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.736336711Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc9cbc08-d6c5-4d28-985d-9f16defaa166 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.736846839Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725993806736821906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc9cbc08-d6c5-4d28-985d-9f16defaa166 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.737270246Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3043e14c-d1c6-4e63-8f74-ffddd136dfc7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.737323601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3043e14c-d1c6-4e63-8f74-ffddd136dfc7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:26 pause-459729 crio[2299]: time="2024-09-10 18:43:26.737639117Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b866e4cc28dd3cf4f944df54572ae08b824ef75d314923b0db1c4f5af3425b6,PodSandboxId:4b1b11b89be19963c958f83b5d440c2a33ac5436a4384c96acff350ea37784de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725993785824068433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714edffd445c1714a068f98869ebc1c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ec9808cc4014d60aa3e6d0131ef94422e6d5971e058f56f7d6f9529ed776e3,PodSandboxId:e6fadd9a4845f20f060763c96f73e397290b5ec090c0495c5e654209330cc289,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725993785844450133,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7619d7b95fd844b416116d997fb3ebe3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732d62087adce4a67e4113a8aea919ec8ab4acc910f2ed1ddc42430c05c8d541,PodSandboxId:def0ff0bbe3e740d038c9be2127898ba44a9a07c22e31f47b0f75a7b7e110910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725993785811071051,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871a9a42d1484218d404311dd3859ae5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9ab82089fc93814bf4c345b95ac1779cd3a315bc9e9bcce2123bd5b92c6fd9,PodSandboxId:474db1987e972b55d3045939c8b1890b3385d68f9c1f9a02fa810fed54246b03,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725993773655203101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbef841f86b1ab53aef52f1309c1f595,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400ae9ffe07da788b92b399c1c7de34a9e0c78250ee033231c3584c775006d6c,PodSandboxId:05f0f1bc6c34e0b367f42f35ec596807fc49984d0f3347ec57c2acf000e432f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725993763296068506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t7nl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a44ca5d97d3aea67a6faa8d29de9af945346902b504f41541efc91ce9c20cc02,PodSandboxId:bcb0074ddaf363c84305cd17d03d6bb776f120d08a48bc59959e82d49aa5aff1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725993762669165249,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6f9ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e991db-6bfb-4ffe-bc82-d1533f41844b,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616b2c77b30d4228cd00b109a92ad85ff572cc4e317aad2d290f7c3093bef10a,PodSandboxId:e6fadd9a4845f20f060763c96f73e397290b5ec090c0495c5e654209330cc289,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725993762511439925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7619d7b95fd844b416116d997fb3ebe3,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887b204becab401e5b3be83d69dacee629b065ccd9ab9732c3fcf630fab0e154,PodSandboxId:474db1987e972b55d3045939c8b1890b3385d68f9c1f9a02fa810fed54246b03,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725993762473505516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbef841f86b1ab53aef52f1309c1f595,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7296148ec9e14e597ba27cb7da7aaf117de63f5939640facb7f28fb7bc28223,PodSandboxId:4b1b11b89be19963c958f83b5d440c2a33ac5436a4384c96acff350ea37784de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725993762432097415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714edffd445c1714a068f98869ebc1c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8b4bdfe79b76c4e19d5fb554dbaa75db3ea828999d2c94e0485f001f018b6fe,PodSandboxId:def0ff0bbe3e740d038c9be2127898ba44a9a07c22e31f47b0f75a7b7e110910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725993762399919336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871a9a42d1484218d404311dd3859ae5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c554d2f7dee4e29def9c7a65b6a5ff57f819fb17da98d03f21775e5cacde12,PodSandboxId:b8e1ea111f3c6c3342d44022a0e4bb47716197ca7abdd3ec87898c79c0d65e35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725993715849204907,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6f9ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e991db-6bfb-4ffe-bc82-d1533f41844b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f2773d9050f95c3f38aafc5ac2cd12063952f294a075fc1d401576aaaceac03,PodSandboxId:42082ba41e76d8021fb2a7fc27b8c0614c18fccad25c7363a522647e80db058d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725993715778027536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t7nl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3043e14c-d1c6-4e63-8f74-ffddd136dfc7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e0ec9808cc401       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   20 seconds ago       Running             kube-controller-manager   2                   e6fadd9a4845f       kube-controller-manager-pause-459729
	6b866e4cc28dd       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   20 seconds ago       Running             kube-scheduler            2                   4b1b11b89be19       kube-scheduler-pause-459729
	732d62087adce       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   20 seconds ago       Running             kube-apiserver            2                   def0ff0bbe3e7       kube-apiserver-pause-459729
	0c9ab82089fc9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   33 seconds ago       Running             etcd                      2                   474db1987e972       etcd-pause-459729
	400ae9ffe07da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   43 seconds ago       Running             coredns                   1                   05f0f1bc6c34e       coredns-6f6b679f8f-t7nl7
	a44ca5d97d3ae       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   44 seconds ago       Running             kube-proxy                1                   bcb0074ddaf36       kube-proxy-6f9ft
	616b2c77b30d4       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   44 seconds ago       Exited              kube-controller-manager   1                   e6fadd9a4845f       kube-controller-manager-pause-459729
	887b204becab4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   44 seconds ago       Exited              etcd                      1                   474db1987e972       etcd-pause-459729
	e7296148ec9e1       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   44 seconds ago       Exited              kube-scheduler            1                   4b1b11b89be19       kube-scheduler-pause-459729
	d8b4bdfe79b76       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   44 seconds ago       Exited              kube-apiserver            1                   def0ff0bbe3e7       kube-apiserver-pause-459729
	28c554d2f7dee       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   About a minute ago   Exited              kube-proxy                0                   b8e1ea111f3c6       kube-proxy-6f9ft
	9f2773d9050f9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   42082ba41e76d       coredns-6f6b679f8f-t7nl7
	
	
	==> coredns [400ae9ffe07da788b92b399c1c7de34a9e0c78250ee033231c3584c775006d6c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57810 - 35147 "HINFO IN 8656652298870847717.8343541974218492693. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010495775s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1735866020]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:42:43.504) (total time: 10001ms):
	Trace[1735866020]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:42:53.505)
	Trace[1735866020]: [10.001375551s] [10.001375551s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1126934098]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:42:43.503) (total time: 10002ms):
	Trace[1126934098]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (18:42:53.505)
	Trace[1126934098]: [10.002353503s] [10.002353503s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[373000815]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:42:43.504) (total time: 10001ms):
	Trace[373000815]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:42:53.505)
	Trace[373000815]: [10.001451296s] [10.001451296s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [9f2773d9050f95c3f38aafc5ac2cd12063952f294a075fc1d401576aaaceac03] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/kubernetes: Trace[621585235]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:41:56.111) (total time: 28748ms):
	Trace[621585235]: ---"Objects listed" error:<nil> 28748ms (18:42:24.859)
	Trace[621585235]: [28.748792278s] [28.748792278s] END
	[INFO] plugin/kubernetes: Trace[203482088]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:41:56.109) (total time: 28750ms):
	Trace[203482088]: ---"Objects listed" error:<nil> 28750ms (18:42:24.860)
	Trace[203482088]: [28.750366005s] [28.750366005s] END
	[INFO] plugin/kubernetes: Trace[1264046127]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:41:56.111) (total time: 28749ms):
	Trace[1264046127]: ---"Objects listed" error:<nil> 28749ms (18:42:24.860)
	Trace[1264046127]: [28.749417038s] [28.749417038s] END
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:37650 - 178 "HINFO IN 7572563276532883976.1939927841076177656. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009368328s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-459729
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-459729
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=pause-459729
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T18_41_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:41:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-459729
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:43:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:43:08 +0000   Tue, 10 Sep 2024 18:41:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:43:08 +0000   Tue, 10 Sep 2024 18:41:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:43:08 +0000   Tue, 10 Sep 2024 18:41:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:43:08 +0000   Tue, 10 Sep 2024 18:41:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.99
	  Hostname:    pause-459729
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 e1eff2ee73484525963ca3613f35c68e
	  System UUID:                e1eff2ee-7348-4525-963c-a3613f35c68e
	  Boot ID:                    8607bc42-0597-406d-af20-406002aaa270
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-t7nl7                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     93s
	  kube-system                 etcd-pause-459729                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         99s
	  kube-system                 kube-apiserver-pause-459729             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-pause-459729    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-6f9ft                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-pause-459729             100m (5%)     0 (0%)      0 (0%)           0 (0%)         98s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 90s                kube-proxy       
	  Normal  Starting                 25s                kube-proxy       
	  Normal  NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  98s                kubelet          Node pause-459729 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s                kubelet          Node pause-459729 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s                kubelet          Node pause-459729 status is now: NodeHasSufficientPID
	  Normal  Starting                 98s                kubelet          Starting kubelet.
	  Normal  NodeReady                97s                kubelet          Node pause-459729 status is now: NodeReady
	  Normal  RegisteredNode           94s                node-controller  Node pause-459729 event: Registered Node pause-459729 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-459729 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-459729 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-459729 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                node-controller  Node pause-459729 event: Registered Node pause-459729 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.261497] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.086710] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074657] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.238765] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.168642] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.379200] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +4.229102] systemd-fstab-generator[742]: Ignoring "noauto" option for root device
	[  +0.069476] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.266396] systemd-fstab-generator[873]: Ignoring "noauto" option for root device
	[  +6.563568] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.091635] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.343663] systemd-fstab-generator[1339]: Ignoring "noauto" option for root device
	[  +0.150839] kauditd_printk_skb: 18 callbacks suppressed
	[Sep10 18:42] kauditd_printk_skb: 97 callbacks suppressed
	[ +33.816879] systemd-fstab-generator[2223]: Ignoring "noauto" option for root device
	[  +0.151690] systemd-fstab-generator[2235]: Ignoring "noauto" option for root device
	[  +0.183951] systemd-fstab-generator[2249]: Ignoring "noauto" option for root device
	[  +0.149466] systemd-fstab-generator[2261]: Ignoring "noauto" option for root device
	[  +0.304783] systemd-fstab-generator[2289]: Ignoring "noauto" option for root device
	[  +0.697376] systemd-fstab-generator[2411]: Ignoring "noauto" option for root device
	[  +8.323407] kauditd_printk_skb: 197 callbacks suppressed
	[Sep10 18:43] systemd-fstab-generator[3303]: Ignoring "noauto" option for root device
	[  +8.286267] kauditd_printk_skb: 32 callbacks suppressed
	[  +9.476277] systemd-fstab-generator[3540]: Ignoring "noauto" option for root device
	
	
	==> etcd [0c9ab82089fc93814bf4c345b95ac1779cd3a315bc9e9bcce2123bd5b92c6fd9] <==
	{"level":"info","ts":"2024-09-10T18:42:53.798952Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.99:2380"}
	{"level":"info","ts":"2024-09-10T18:42:53.797823Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-10T18:42:53.799027Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-10T18:42:53.799055Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-10T18:42:53.798154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 switched to configuration voters=(3860576149566711699)"}
	{"level":"info","ts":"2024-09-10T18:42:53.797717Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-10T18:42:53.799195Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"be49f23f45b186b5","local-member-id":"3593858dc744e793","added-peer-id":"3593858dc744e793","added-peer-peer-urls":["https://192.168.50.99:2380"]}
	{"level":"info","ts":"2024-09-10T18:42:53.799391Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"be49f23f45b186b5","local-member-id":"3593858dc744e793","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:42:53.799443Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:42:55.385411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-10T18:42:55.385506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-10T18:42:55.385623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 received MsgPreVoteResp from 3593858dc744e793 at term 2"}
	{"level":"info","ts":"2024-09-10T18:42:55.385653Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 became candidate at term 3"}
	{"level":"info","ts":"2024-09-10T18:42:55.385661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 received MsgVoteResp from 3593858dc744e793 at term 3"}
	{"level":"info","ts":"2024-09-10T18:42:55.385675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 became leader at term 3"}
	{"level":"info","ts":"2024-09-10T18:42:55.385695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3593858dc744e793 elected leader 3593858dc744e793 at term 3"}
	{"level":"info","ts":"2024-09-10T18:42:55.391115Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3593858dc744e793","local-member-attributes":"{Name:pause-459729 ClientURLs:[https://192.168.50.99:2379]}","request-path":"/0/members/3593858dc744e793/attributes","cluster-id":"be49f23f45b186b5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T18:42:55.391133Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:42:55.391433Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T18:42:55.391456Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T18:42:55.391482Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:42:55.392670Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:42:55.392744Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:42:55.393849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T18:42:55.393849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.99:2379"}
	
	
	==> etcd [887b204becab401e5b3be83d69dacee629b065ccd9ab9732c3fcf630fab0e154] <==
	{"level":"info","ts":"2024-09-10T18:42:43.042116Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-10T18:42:43.089797Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"be49f23f45b186b5","local-member-id":"3593858dc744e793","commit-index":419}
	{"level":"info","ts":"2024-09-10T18:42:43.090159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-10T18:42:43.095953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 became follower at term 2"}
	{"level":"info","ts":"2024-09-10T18:42:43.096637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 3593858dc744e793 [peers: [], term: 2, commit: 419, applied: 0, lastindex: 419, lastterm: 2]"}
	{"level":"warn","ts":"2024-09-10T18:42:43.100858Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-10T18:42:43.106624Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":400}
	{"level":"info","ts":"2024-09-10T18:42:43.111962Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-10T18:42:43.127467Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"3593858dc744e793","timeout":"7s"}
	{"level":"info","ts":"2024-09-10T18:42:43.130121Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"3593858dc744e793"}
	{"level":"info","ts":"2024-09-10T18:42:43.130201Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"3593858dc744e793","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-10T18:42:43.130751Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:42:43.130932Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-10T18:42:43.131068Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-10T18:42:43.134743Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-10T18:42:43.134782Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-10T18:42:43.131304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 switched to configuration voters=(3860576149566711699)"}
	{"level":"info","ts":"2024-09-10T18:42:43.134925Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"be49f23f45b186b5","local-member-id":"3593858dc744e793","added-peer-id":"3593858dc744e793","added-peer-peer-urls":["https://192.168.50.99:2380"]}
	{"level":"info","ts":"2024-09-10T18:42:43.135039Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"be49f23f45b186b5","local-member-id":"3593858dc744e793","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:42:43.135088Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:42:43.137474Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-10T18:42:43.137708Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.99:2380"}
	{"level":"info","ts":"2024-09-10T18:42:43.137719Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.99:2380"}
	{"level":"info","ts":"2024-09-10T18:42:43.143249Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"3593858dc744e793","initial-advertise-peer-urls":["https://192.168.50.99:2380"],"listen-peer-urls":["https://192.168.50.99:2380"],"advertise-client-urls":["https://192.168.50.99:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.99:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-10T18:42:43.143312Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> kernel <==
	 18:43:27 up 2 min,  0 users,  load average: 0.56, 0.22, 0.08
	Linux pause-459729 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [732d62087adce4a67e4113a8aea919ec8ab4acc910f2ed1ddc42430c05c8d541] <==
	I0910 18:43:08.767848       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0910 18:43:08.768016       1 aggregator.go:171] initial CRD sync complete...
	I0910 18:43:08.768055       1 autoregister_controller.go:144] Starting autoregister controller
	I0910 18:43:08.768080       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0910 18:43:08.768102       1 cache.go:39] Caches are synced for autoregister controller
	I0910 18:43:08.776045       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0910 18:43:08.776112       1 policy_source.go:224] refreshing policies
	I0910 18:43:08.794855       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0910 18:43:08.797290       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0910 18:43:08.797974       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0910 18:43:08.798001       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0910 18:43:08.799038       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0910 18:43:08.799421       1 shared_informer.go:320] Caches are synced for configmaps
	I0910 18:43:08.799506       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0910 18:43:08.799842       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0910 18:43:08.807308       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0910 18:43:09.602721       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0910 18:43:09.813061       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.99]
	I0910 18:43:09.814147       1 controller.go:615] quota admission added evaluator for: endpoints
	I0910 18:43:09.819365       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0910 18:43:10.019207       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0910 18:43:10.030961       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0910 18:43:10.074870       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0910 18:43:10.110098       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0910 18:43:10.117357       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [d8b4bdfe79b76c4e19d5fb554dbaa75db3ea828999d2c94e0485f001f018b6fe] <==
	I0910 18:43:01.717171       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0910 18:43:01.717214       1 controller.go:132] Ending legacy_token_tracking_controller
	I0910 18:43:01.717220       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I0910 18:43:01.717237       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0910 18:43:01.717248       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0910 18:43:01.717802       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0910 18:43:01.717959       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0910 18:43:01.718435       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0910 18:43:01.718477       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0910 18:43:01.718506       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0910 18:43:01.718636       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0910 18:43:01.720441       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0910 18:43:01.720685       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0910 18:43:01.723646       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0910 18:43:01.725664       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0910 18:43:01.749619       1 controller.go:157] Shutting down quota evaluator
	I0910 18:43:01.750067       1 controller.go:176] quota evaluator worker shutdown
	I0910 18:43:01.750114       1 controller.go:176] quota evaluator worker shutdown
	I0910 18:43:01.750125       1 controller.go:176] quota evaluator worker shutdown
	I0910 18:43:01.750133       1 controller.go:176] quota evaluator worker shutdown
	I0910 18:43:01.750140       1 controller.go:176] quota evaluator worker shutdown
	E0910 18:43:02.379960       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:43:02.380297       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W0910 18:43:03.380218       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0910 18:43:03.380644       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-controller-manager [616b2c77b30d4228cd00b109a92ad85ff572cc4e317aad2d290f7c3093bef10a] <==
	
	
	==> kube-controller-manager [e0ec9808cc4014d60aa3e6d0131ef94422e6d5971e058f56f7d6f9529ed776e3] <==
	I0910 18:43:12.047153       1 shared_informer.go:320] Caches are synced for disruption
	I0910 18:43:12.048322       1 shared_informer.go:320] Caches are synced for cronjob
	I0910 18:43:12.050618       1 shared_informer.go:320] Caches are synced for TTL
	I0910 18:43:12.063917       1 shared_informer.go:320] Caches are synced for node
	I0910 18:43:12.064002       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0910 18:43:12.064024       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0910 18:43:12.064046       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0910 18:43:12.064053       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0910 18:43:12.064124       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-459729"
	I0910 18:43:12.069578       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0910 18:43:12.086034       1 shared_informer.go:320] Caches are synced for endpoint
	I0910 18:43:12.136040       1 shared_informer.go:320] Caches are synced for attach detach
	I0910 18:43:12.202086       1 shared_informer.go:320] Caches are synced for taint
	I0910 18:43:12.202308       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0910 18:43:12.202456       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-459729"
	I0910 18:43:12.202610       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0910 18:43:12.241351       1 shared_informer.go:320] Caches are synced for resource quota
	I0910 18:43:12.286501       1 shared_informer.go:320] Caches are synced for resource quota
	I0910 18:43:12.670919       1 shared_informer.go:320] Caches are synced for garbage collector
	I0910 18:43:12.735402       1 shared_informer.go:320] Caches are synced for garbage collector
	I0910 18:43:12.735446       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0910 18:43:15.772985       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="37.731468ms"
	I0910 18:43:15.773106       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="61.45µs"
	I0910 18:43:15.794857       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="20.614211ms"
	I0910 18:43:15.795087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="103.774µs"
	
	
	==> kube-proxy [28c554d2f7dee4e29def9c7a65b6a5ff57f819fb17da98d03f21775e5cacde12] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 18:41:56.165158       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 18:41:56.175071       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.99"]
	E0910 18:41:56.175160       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 18:41:56.219298       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 18:41:56.219347       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 18:41:56.219387       1 server_linux.go:169] "Using iptables Proxier"
	I0910 18:41:56.223062       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 18:41:56.223319       1 server.go:483] "Version info" version="v1.31.0"
	I0910 18:41:56.223330       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:41:56.226208       1 config.go:197] "Starting service config controller"
	I0910 18:41:56.226254       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 18:41:56.227930       1 config.go:104] "Starting endpoint slice config controller"
	I0910 18:41:56.227962       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 18:41:56.228497       1 config.go:326] "Starting node config controller"
	I0910 18:41:56.232786       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 18:41:56.326334       1 shared_informer.go:320] Caches are synced for service config
	I0910 18:41:56.328573       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 18:41:56.335109       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [a44ca5d97d3aea67a6faa8d29de9af945346902b504f41541efc91ce9c20cc02] <==
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 18:42:53.734823       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-459729\": net/http: TLS handshake timeout"
	I0910 18:43:01.602973       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.99"]
	E0910 18:43:01.603310       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 18:43:01.646869       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 18:43:01.646943       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 18:43:01.646977       1 server_linux.go:169] "Using iptables Proxier"
	I0910 18:43:01.650362       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 18:43:01.650809       1 server.go:483] "Version info" version="v1.31.0"
	I0910 18:43:01.650851       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:43:01.653685       1 config.go:197] "Starting service config controller"
	I0910 18:43:01.653749       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 18:43:01.653788       1 config.go:104] "Starting endpoint slice config controller"
	I0910 18:43:01.653814       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 18:43:01.655125       1 config.go:326] "Starting node config controller"
	I0910 18:43:01.655226       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 18:43:01.754617       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 18:43:01.754689       1 shared_informer.go:320] Caches are synced for service config
	I0910 18:43:01.756046       1 shared_informer.go:320] Caches are synced for node config
	E0910 18:43:08.735821       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)" logger="UnhandledError"
	E0910 18:43:08.735993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0910 18:43:08.736102       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	
	
	==> kube-scheduler [6b866e4cc28dd3cf4f944df54572ae08b824ef75d314923b0db1c4f5af3425b6] <==
	I0910 18:43:07.411009       1 serving.go:386] Generated self-signed cert in-memory
	W0910 18:43:08.689903       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0910 18:43:08.690126       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0910 18:43:08.690165       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0910 18:43:08.690314       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0910 18:43:08.738994       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0910 18:43:08.744605       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:43:08.753484       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0910 18:43:08.754732       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0910 18:43:08.755820       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 18:43:08.754763       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0910 18:43:08.857070       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e7296148ec9e14e597ba27cb7da7aaf117de63f5939640facb7f28fb7bc28223] <==
	I0910 18:42:43.837228       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Sep 10 18:43:05 pause-459729 kubelet[3310]: I0910 18:43:05.533969    3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7619d7b95fd844b416116d997fb3ebe3-flexvolume-dir\") pod \"kube-controller-manager-pause-459729\" (UID: \"7619d7b95fd844b416116d997fb3ebe3\") " pod="kube-system/kube-controller-manager-pause-459729"
	Sep 10 18:43:05 pause-459729 kubelet[3310]: I0910 18:43:05.533989    3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7619d7b95fd844b416116d997fb3ebe3-kubeconfig\") pod \"kube-controller-manager-pause-459729\" (UID: \"7619d7b95fd844b416116d997fb3ebe3\") " pod="kube-system/kube-controller-manager-pause-459729"
	Sep 10 18:43:05 pause-459729 kubelet[3310]: I0910 18:43:05.534017    3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7619d7b95fd844b416116d997fb3ebe3-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-459729\" (UID: \"7619d7b95fd844b416116d997fb3ebe3\") " pod="kube-system/kube-controller-manager-pause-459729"
	Sep 10 18:43:05 pause-459729 kubelet[3310]: E0910 18:43:05.535210    3310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-459729?timeout=10s\": dial tcp 192.168.50.99:8443: connect: connection refused" interval="400ms"
	Sep 10 18:43:05 pause-459729 kubelet[3310]: I0910 18:43:05.700003    3310 kubelet_node_status.go:72] "Attempting to register node" node="pause-459729"
	Sep 10 18:43:05 pause-459729 kubelet[3310]: E0910 18:43:05.701156    3310 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.99:8443: connect: connection refused" node="pause-459729"
	Sep 10 18:43:05 pause-459729 kubelet[3310]: I0910 18:43:05.786100    3310 scope.go:117] "RemoveContainer" containerID="d8b4bdfe79b76c4e19d5fb554dbaa75db3ea828999d2c94e0485f001f018b6fe"
	Sep 10 18:43:05 pause-459729 kubelet[3310]: I0910 18:43:05.787437    3310 scope.go:117] "RemoveContainer" containerID="616b2c77b30d4228cd00b109a92ad85ff572cc4e317aad2d290f7c3093bef10a"
	Sep 10 18:43:05 pause-459729 kubelet[3310]: I0910 18:43:05.788743    3310 scope.go:117] "RemoveContainer" containerID="e7296148ec9e14e597ba27cb7da7aaf117de63f5939640facb7f28fb7bc28223"
	Sep 10 18:43:05 pause-459729 kubelet[3310]: E0910 18:43:05.937496    3310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-459729?timeout=10s\": dial tcp 192.168.50.99:8443: connect: connection refused" interval="800ms"
	Sep 10 18:43:06 pause-459729 kubelet[3310]: I0910 18:43:06.102670    3310 kubelet_node_status.go:72] "Attempting to register node" node="pause-459729"
	Sep 10 18:43:06 pause-459729 kubelet[3310]: E0910 18:43:06.103787    3310 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.99:8443: connect: connection refused" node="pause-459729"
	Sep 10 18:43:06 pause-459729 kubelet[3310]: I0910 18:43:06.905603    3310 kubelet_node_status.go:72] "Attempting to register node" node="pause-459729"
	Sep 10 18:43:08 pause-459729 kubelet[3310]: I0910 18:43:08.878929    3310 kubelet_node_status.go:111] "Node was previously registered" node="pause-459729"
	Sep 10 18:43:08 pause-459729 kubelet[3310]: I0910 18:43:08.879027    3310 kubelet_node_status.go:75] "Successfully registered node" node="pause-459729"
	Sep 10 18:43:08 pause-459729 kubelet[3310]: I0910 18:43:08.879048    3310 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 10 18:43:08 pause-459729 kubelet[3310]: I0910 18:43:08.880008    3310 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 10 18:43:09 pause-459729 kubelet[3310]: I0910 18:43:09.308406    3310 apiserver.go:52] "Watching apiserver"
	Sep 10 18:43:09 pause-459729 kubelet[3310]: I0910 18:43:09.329243    3310 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 10 18:43:09 pause-459729 kubelet[3310]: I0910 18:43:09.428657    3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3e991db-6bfb-4ffe-bc82-d1533f41844b-lib-modules\") pod \"kube-proxy-6f9ft\" (UID: \"d3e991db-6bfb-4ffe-bc82-d1533f41844b\") " pod="kube-system/kube-proxy-6f9ft"
	Sep 10 18:43:09 pause-459729 kubelet[3310]: I0910 18:43:09.428729    3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3e991db-6bfb-4ffe-bc82-d1533f41844b-xtables-lock\") pod \"kube-proxy-6f9ft\" (UID: \"d3e991db-6bfb-4ffe-bc82-d1533f41844b\") " pod="kube-system/kube-proxy-6f9ft"
	Sep 10 18:43:15 pause-459729 kubelet[3310]: E0910 18:43:15.402337    3310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725993795401990823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:43:15 pause-459729 kubelet[3310]: E0910 18:43:15.402701    3310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725993795401990823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:43:25 pause-459729 kubelet[3310]: E0910 18:43:25.404078    3310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725993805403863631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:43:25 pause-459729 kubelet[3310]: E0910 18:43:25.404125    3310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725993805403863631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-459729 -n pause-459729
helpers_test.go:261: (dbg) Run:  kubectl --context pause-459729 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-459729 -n pause-459729
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-459729 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-459729 logs -n 25: (1.426062492s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-642043 sudo cat      | cilium-642043             | jenkins | v1.34.0 | 10 Sep 24 18:39 UTC |                     |
	|         | /etc/containerd/config.toml    |                           |         |         |                     |                     |
	| ssh     | -p cilium-642043 sudo          | cilium-642043             | jenkins | v1.34.0 | 10 Sep 24 18:39 UTC |                     |
	|         | containerd config dump         |                           |         |         |                     |                     |
	| ssh     | -p cilium-642043 sudo          | cilium-642043             | jenkins | v1.34.0 | 10 Sep 24 18:39 UTC |                     |
	|         | systemctl status crio --all    |                           |         |         |                     |                     |
	|         | --full --no-pager              |                           |         |         |                     |                     |
	| ssh     | -p cilium-642043 sudo          | cilium-642043             | jenkins | v1.34.0 | 10 Sep 24 18:39 UTC |                     |
	|         | systemctl cat crio --no-pager  |                           |         |         |                     |                     |
	| ssh     | -p cilium-642043 sudo find     | cilium-642043             | jenkins | v1.34.0 | 10 Sep 24 18:39 UTC |                     |
	|         | /etc/crio -type f -exec sh -c  |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;           |                           |         |         |                     |                     |
	| ssh     | -p cilium-642043 sudo crio     | cilium-642043             | jenkins | v1.34.0 | 10 Sep 24 18:39 UTC |                     |
	|         | config                         |                           |         |         |                     |                     |
	| delete  | -p cilium-642043               | cilium-642043             | jenkins | v1.34.0 | 10 Sep 24 18:39 UTC | 10 Sep 24 18:39 UTC |
	| start   | -p running-upgrade-926585      | minikube                  | jenkins | v1.26.0 | 10 Sep 24 18:39 UTC | 10 Sep 24 18:41 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-358325 stop    | minikube                  | jenkins | v1.26.0 | 10 Sep 24 18:40 UTC | 10 Sep 24 18:40 UTC |
	| start   | -p stopped-upgrade-358325      | stopped-upgrade-358325    | jenkins | v1.34.0 | 10 Sep 24 18:40 UTC | 10 Sep 24 18:41 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-174877         | offline-crio-174877       | jenkins | v1.34.0 | 10 Sep 24 18:40 UTC | 10 Sep 24 18:40 UTC |
	| start   | -p pause-459729 --memory=2048  | pause-459729              | jenkins | v1.34.0 | 10 Sep 24 18:40 UTC | 10 Sep 24 18:42 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-926585      | running-upgrade-926585    | jenkins | v1.34.0 | 10 Sep 24 18:41 UTC | 10 Sep 24 18:43 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-358325      | stopped-upgrade-358325    | jenkins | v1.34.0 | 10 Sep 24 18:41 UTC | 10 Sep 24 18:41 UTC |
	| start   | -p NoKubernetes-229565         | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:41 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-229565         | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:41 UTC | 10 Sep 24 18:42 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-459729                | pause-459729              | jenkins | v1.34.0 | 10 Sep 24 18:42 UTC | 10 Sep 24 18:43 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-229565         | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:42 UTC | 10 Sep 24 18:42 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-229565         | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:42 UTC | 10 Sep 24 18:42 UTC |
	| start   | -p NoKubernetes-229565         | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:42 UTC | 10 Sep 24 18:43 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-926585      | running-upgrade-926585    | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	| start   | -p force-systemd-flag-652506   | force-systemd-flag-652506 | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC |                     |
	|         | --memory=2048 --force-systemd  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-229565 sudo    | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-229565         | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC | 10 Sep 24 18:43 UTC |
	| start   | -p NoKubernetes-229565         | NoKubernetes-229565       | jenkins | v1.34.0 | 10 Sep 24 18:43 UTC |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
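	The table above wraps long invocations across several rows; reconstructed onto a single line, the final start listed there (a hedged reading of the wrapped cells, not a new command) is roughly:

	    minikube start -p NoKubernetes-229565 --driver=kvm2 --container-runtime=crio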
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:43:23
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:43:23.102113   54227 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:43:23.102207   54227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:43:23.102211   54227 out.go:358] Setting ErrFile to fd 2...
	I0910 18:43:23.102214   54227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:43:23.102378   54227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:43:23.102852   54227 out.go:352] Setting JSON to false
	I0910 18:43:23.103860   54227 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5155,"bootTime":1725988648,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 18:43:23.103910   54227 start.go:139] virtualization: kvm guest
	I0910 18:43:23.105771   54227 out.go:177] * [NoKubernetes-229565] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 18:43:23.106991   54227 notify.go:220] Checking for updates...
	I0910 18:43:23.106999   54227 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:43:23.108537   54227 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:43:23.109983   54227 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:43:23.111002   54227 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:43:23.112061   54227 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 18:43:23.113104   54227 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
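	The flags listed just above reflect the harness environment for this run; a sketch of how a shell could reproduce them before invoking minikube (values copied from the log, the export form itself is an assumption about the CI job):

	    export MINIKUBE_LOCATION=19598
	    export MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	    export KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	    export MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	    export MINIKUBE_BIN=out/minikube-linux-amd64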
	I0910 18:43:23.114612   54227 config.go:182] Loaded profile config "NoKubernetes-229565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0910 18:43:23.115183   54227 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:43:23.115259   54227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:43:23.134452   54227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46337
	I0910 18:43:23.134890   54227 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:43:23.135488   54227 main.go:141] libmachine: Using API Version  1
	I0910 18:43:23.135502   54227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:43:23.135794   54227 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:43:23.135940   54227 main.go:141] libmachine: (NoKubernetes-229565) Calling .DriverName
	I0910 18:43:23.136161   54227 start.go:1780] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0910 18:43:23.136173   54227 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:43:23.136445   54227 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:43:23.136477   54227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:43:23.150786   54227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38691
	I0910 18:43:23.151110   54227 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:43:23.151655   54227 main.go:141] libmachine: Using API Version  1
	I0910 18:43:23.151675   54227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:43:23.151957   54227 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:43:23.152124   54227 main.go:141] libmachine: (NoKubernetes-229565) Calling .DriverName
	I0910 18:43:23.188623   54227 out.go:177] * Using the kvm2 driver based on existing profile
	I0910 18:43:23.189672   54227 start.go:297] selected driver: kvm2
	I0910 18:43:23.189680   54227 start.go:901] validating driver "kvm2" against &{Name:NoKubernetes-229565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v0.0.0 ClusterName:NoKubernetes-229565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.38 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:43:23.189823   54227 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:43:23.190201   54227 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:43:23.190261   54227 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 18:43:23.205174   54227 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 18:43:23.206176   54227 cni.go:84] Creating CNI manager for ""
	I0910 18:43:23.206189   54227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:43:23.206268   54227 start.go:340] cluster config:
	{Name:NoKubernetes-229565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-229565 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.38 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:43:23.206426   54227 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:43:23.208244   54227 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-229565
	I0910 18:43:19.213515   53892 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0910 18:43:19.213691   53892 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:43:19.213733   53892 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:43:19.228382   53892 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0910 18:43:19.228872   53892 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:43:19.229419   53892 main.go:141] libmachine: Using API Version  1
	I0910 18:43:19.229439   53892 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:43:19.229745   53892 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:43:19.229934   53892 main.go:141] libmachine: (force-systemd-flag-652506) Calling .GetMachineName
	I0910 18:43:19.230064   53892 main.go:141] libmachine: (force-systemd-flag-652506) Calling .DriverName
	I0910 18:43:19.230223   53892 start.go:159] libmachine.API.Create for "force-systemd-flag-652506" (driver="kvm2")
	I0910 18:43:19.230252   53892 client.go:168] LocalClient.Create starting
	I0910 18:43:19.230285   53892 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem
	I0910 18:43:19.230330   53892 main.go:141] libmachine: Decoding PEM data...
	I0910 18:43:19.230349   53892 main.go:141] libmachine: Parsing certificate...
	I0910 18:43:19.230417   53892 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem
	I0910 18:43:19.230443   53892 main.go:141] libmachine: Decoding PEM data...
	I0910 18:43:19.230462   53892 main.go:141] libmachine: Parsing certificate...
	I0910 18:43:19.230486   53892 main.go:141] libmachine: Running pre-create checks...
	I0910 18:43:19.230504   53892 main.go:141] libmachine: (force-systemd-flag-652506) Calling .PreCreateCheck
	I0910 18:43:19.230828   53892 main.go:141] libmachine: (force-systemd-flag-652506) Calling .GetConfigRaw
	I0910 18:43:19.231188   53892 main.go:141] libmachine: Creating machine...
	I0910 18:43:19.231201   53892 main.go:141] libmachine: (force-systemd-flag-652506) Calling .Create
	I0910 18:43:19.231322   53892 main.go:141] libmachine: (force-systemd-flag-652506) Creating KVM machine...
	I0910 18:43:19.232500   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | found existing default KVM network
	I0910 18:43:19.233630   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:19.233471   53915 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e4:ea:60} reservation:<nil>}
	I0910 18:43:19.234450   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:19.234385   53915 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:42:2b:2c} reservation:<nil>}
	I0910 18:43:19.235493   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:19.235390   53915 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000308800}
	I0910 18:43:19.235518   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | created network xml: 
	I0910 18:43:19.235529   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | <network>
	I0910 18:43:19.235546   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG |   <name>mk-force-systemd-flag-652506</name>
	I0910 18:43:19.235563   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG |   <dns enable='no'/>
	I0910 18:43:19.235573   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG |   
	I0910 18:43:19.235584   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0910 18:43:19.235595   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG |     <dhcp>
	I0910 18:43:19.235610   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0910 18:43:19.235617   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG |     </dhcp>
	I0910 18:43:19.235623   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG |   </ip>
	I0910 18:43:19.235635   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG |   
	I0910 18:43:19.235652   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | </network>
	I0910 18:43:19.235660   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | 
	I0910 18:43:19.240653   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | trying to create private KVM network mk-force-systemd-flag-652506 192.168.61.0/24...
	I0910 18:43:19.313960   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | private KVM network mk-force-systemd-flag-652506 192.168.61.0/24 created
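	minikube created the private network above through the libvirt API; an approximately equivalent manual inspection (assuming virsh is available on the host and using the qemu:///system connection shown in the log) would be:

	    virsh --connect qemu:///system net-list --all
	    virsh --connect qemu:///system net-dumpxml mk-force-systemd-flag-652506

	The same XML saved to a file (the file name here is only illustrative) could be loaded by hand with:

	    virsh --connect qemu:///system net-define mk-force-systemd-flag-652506.xml
	    virsh --connect qemu:///system net-start mk-force-systemd-flag-652506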
	I0910 18:43:19.314034   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:19.313951   53915 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:43:19.314058   53892 main.go:141] libmachine: (force-systemd-flag-652506) Setting up store path in /home/jenkins/minikube-integration/19598-5973/.minikube/machines/force-systemd-flag-652506 ...
	I0910 18:43:19.314076   53892 main.go:141] libmachine: (force-systemd-flag-652506) Building disk image from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 18:43:19.314090   53892 main.go:141] libmachine: (force-systemd-flag-652506) Downloading /home/jenkins/minikube-integration/19598-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 18:43:19.555458   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:19.555328   53915 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/force-systemd-flag-652506/id_rsa...
	I0910 18:43:19.614201   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:19.614102   53915 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/force-systemd-flag-652506/force-systemd-flag-652506.rawdisk...
	I0910 18:43:19.614229   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Writing magic tar header
	I0910 18:43:19.614248   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Writing SSH key tar header
	I0910 18:43:19.614281   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:19.614208   53915 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/force-systemd-flag-652506 ...
	I0910 18:43:19.614312   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/force-systemd-flag-652506
	I0910 18:43:19.614367   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines
	I0910 18:43:19.614385   53892 main.go:141] libmachine: (force-systemd-flag-652506) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/force-systemd-flag-652506 (perms=drwx------)
	I0910 18:43:19.614402   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:43:19.614421   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973
	I0910 18:43:19.614429   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0910 18:43:19.614439   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Checking permissions on dir: /home/jenkins
	I0910 18:43:19.614447   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Checking permissions on dir: /home
	I0910 18:43:19.614459   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | Skipping /home - not owner
	I0910 18:43:19.614479   53892 main.go:141] libmachine: (force-systemd-flag-652506) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines (perms=drwxr-xr-x)
	I0910 18:43:19.614494   53892 main.go:141] libmachine: (force-systemd-flag-652506) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube (perms=drwxr-xr-x)
	I0910 18:43:19.614511   53892 main.go:141] libmachine: (force-systemd-flag-652506) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973 (perms=drwxrwxr-x)
	I0910 18:43:19.614526   53892 main.go:141] libmachine: (force-systemd-flag-652506) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0910 18:43:19.614535   53892 main.go:141] libmachine: (force-systemd-flag-652506) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0910 18:43:19.614542   53892 main.go:141] libmachine: (force-systemd-flag-652506) Creating domain...
	I0910 18:43:19.615540   53892 main.go:141] libmachine: (force-systemd-flag-652506) define libvirt domain using xml: 
	I0910 18:43:19.615572   53892 main.go:141] libmachine: (force-systemd-flag-652506) <domain type='kvm'>
	I0910 18:43:19.615584   53892 main.go:141] libmachine: (force-systemd-flag-652506)   <name>force-systemd-flag-652506</name>
	I0910 18:43:19.615596   53892 main.go:141] libmachine: (force-systemd-flag-652506)   <memory unit='MiB'>2048</memory>
	I0910 18:43:19.615609   53892 main.go:141] libmachine: (force-systemd-flag-652506)   <vcpu>2</vcpu>
	I0910 18:43:19.615620   53892 main.go:141] libmachine: (force-systemd-flag-652506)   <features>
	I0910 18:43:19.615630   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <acpi/>
	I0910 18:43:19.615637   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <apic/>
	I0910 18:43:19.615645   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <pae/>
	I0910 18:43:19.615650   53892 main.go:141] libmachine: (force-systemd-flag-652506)     
	I0910 18:43:19.615656   53892 main.go:141] libmachine: (force-systemd-flag-652506)   </features>
	I0910 18:43:19.615660   53892 main.go:141] libmachine: (force-systemd-flag-652506)   <cpu mode='host-passthrough'>
	I0910 18:43:19.615666   53892 main.go:141] libmachine: (force-systemd-flag-652506)   
	I0910 18:43:19.615671   53892 main.go:141] libmachine: (force-systemd-flag-652506)   </cpu>
	I0910 18:43:19.615692   53892 main.go:141] libmachine: (force-systemd-flag-652506)   <os>
	I0910 18:43:19.615714   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <type>hvm</type>
	I0910 18:43:19.615724   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <boot dev='cdrom'/>
	I0910 18:43:19.615735   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <boot dev='hd'/>
	I0910 18:43:19.615748   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <bootmenu enable='no'/>
	I0910 18:43:19.615759   53892 main.go:141] libmachine: (force-systemd-flag-652506)   </os>
	I0910 18:43:19.615782   53892 main.go:141] libmachine: (force-systemd-flag-652506)   <devices>
	I0910 18:43:19.615796   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <disk type='file' device='cdrom'>
	I0910 18:43:19.615817   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/force-systemd-flag-652506/boot2docker.iso'/>
	I0910 18:43:19.615827   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <target dev='hdc' bus='scsi'/>
	I0910 18:43:19.615835   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <readonly/>
	I0910 18:43:19.615845   53892 main.go:141] libmachine: (force-systemd-flag-652506)     </disk>
	I0910 18:43:19.615855   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <disk type='file' device='disk'>
	I0910 18:43:19.615872   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0910 18:43:19.615889   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/force-systemd-flag-652506/force-systemd-flag-652506.rawdisk'/>
	I0910 18:43:19.615901   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <target dev='hda' bus='virtio'/>
	I0910 18:43:19.615913   53892 main.go:141] libmachine: (force-systemd-flag-652506)     </disk>
	I0910 18:43:19.615924   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <interface type='network'>
	I0910 18:43:19.615964   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <source network='mk-force-systemd-flag-652506'/>
	I0910 18:43:19.615988   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <model type='virtio'/>
	I0910 18:43:19.616001   53892 main.go:141] libmachine: (force-systemd-flag-652506)     </interface>
	I0910 18:43:19.616012   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <interface type='network'>
	I0910 18:43:19.616020   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <source network='default'/>
	I0910 18:43:19.616028   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <model type='virtio'/>
	I0910 18:43:19.616034   53892 main.go:141] libmachine: (force-systemd-flag-652506)     </interface>
	I0910 18:43:19.616042   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <serial type='pty'>
	I0910 18:43:19.616048   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <target port='0'/>
	I0910 18:43:19.616057   53892 main.go:141] libmachine: (force-systemd-flag-652506)     </serial>
	I0910 18:43:19.616067   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <console type='pty'>
	I0910 18:43:19.616082   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <target type='serial' port='0'/>
	I0910 18:43:19.616094   53892 main.go:141] libmachine: (force-systemd-flag-652506)     </console>
	I0910 18:43:19.616103   53892 main.go:141] libmachine: (force-systemd-flag-652506)     <rng model='virtio'>
	I0910 18:43:19.616112   53892 main.go:141] libmachine: (force-systemd-flag-652506)       <backend model='random'>/dev/random</backend>
	I0910 18:43:19.616119   53892 main.go:141] libmachine: (force-systemd-flag-652506)     </rng>
	I0910 18:43:19.616124   53892 main.go:141] libmachine: (force-systemd-flag-652506)     
	I0910 18:43:19.616131   53892 main.go:141] libmachine: (force-systemd-flag-652506)     
	I0910 18:43:19.616139   53892 main.go:141] libmachine: (force-systemd-flag-652506)   </devices>
	I0910 18:43:19.616149   53892 main.go:141] libmachine: (force-systemd-flag-652506) </domain>
	I0910 18:43:19.616167   53892 main.go:141] libmachine: (force-systemd-flag-652506) 
	I0910 18:43:19.620152   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | domain force-systemd-flag-652506 has defined MAC address 52:54:00:b1:50:90 in network default
	I0910 18:43:19.620769   53892 main.go:141] libmachine: (force-systemd-flag-652506) Ensuring networks are active...
	I0910 18:43:19.620784   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | domain force-systemd-flag-652506 has defined MAC address 52:54:00:90:1c:89 in network mk-force-systemd-flag-652506
	I0910 18:43:19.621523   53892 main.go:141] libmachine: (force-systemd-flag-652506) Ensuring network default is active
	I0910 18:43:19.621904   53892 main.go:141] libmachine: (force-systemd-flag-652506) Ensuring network mk-force-systemd-flag-652506 is active
	I0910 18:43:19.622483   53892 main.go:141] libmachine: (force-systemd-flag-652506) Getting domain xml...
	I0910 18:43:19.623214   53892 main.go:141] libmachine: (force-systemd-flag-652506) Creating domain...
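	The domain XML logged above is what libmachine hands to libvirt when defining the VM; once the domain exists it can be inspected with virsh (connection string taken from the log; these are hedged manual equivalents, not commands minikube itself runs):

	    virsh --connect qemu:///system list --all
	    virsh --connect qemu:///system dumpxml force-systemd-flag-652506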
	I0910 18:43:20.923962   53892 main.go:141] libmachine: (force-systemd-flag-652506) Waiting to get IP...
	I0910 18:43:20.924875   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | domain force-systemd-flag-652506 has defined MAC address 52:54:00:90:1c:89 in network mk-force-systemd-flag-652506
	I0910 18:43:20.925336   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | unable to find current IP address of domain force-systemd-flag-652506 in network mk-force-systemd-flag-652506
	I0910 18:43:20.925399   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:20.925331   53915 retry.go:31] will retry after 205.496468ms: waiting for machine to come up
	I0910 18:43:21.132977   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | domain force-systemd-flag-652506 has defined MAC address 52:54:00:90:1c:89 in network mk-force-systemd-flag-652506
	I0910 18:43:21.133446   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | unable to find current IP address of domain force-systemd-flag-652506 in network mk-force-systemd-flag-652506
	I0910 18:43:21.133474   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:21.133401   53915 retry.go:31] will retry after 387.061178ms: waiting for machine to come up
	I0910 18:43:21.521385   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | domain force-systemd-flag-652506 has defined MAC address 52:54:00:90:1c:89 in network mk-force-systemd-flag-652506
	I0910 18:43:21.521934   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | unable to find current IP address of domain force-systemd-flag-652506 in network mk-force-systemd-flag-652506
	I0910 18:43:21.521961   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:21.521887   53915 retry.go:31] will retry after 416.131049ms: waiting for machine to come up
	I0910 18:43:21.939432   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | domain force-systemd-flag-652506 has defined MAC address 52:54:00:90:1c:89 in network mk-force-systemd-flag-652506
	I0910 18:43:21.939844   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | unable to find current IP address of domain force-systemd-flag-652506 in network mk-force-systemd-flag-652506
	I0910 18:43:21.939872   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:21.939796   53915 retry.go:31] will retry after 538.332525ms: waiting for machine to come up
	I0910 18:43:22.480835   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | domain force-systemd-flag-652506 has defined MAC address 52:54:00:90:1c:89 in network mk-force-systemd-flag-652506
	I0910 18:43:22.481356   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | unable to find current IP address of domain force-systemd-flag-652506 in network mk-force-systemd-flag-652506
	I0910 18:43:22.481385   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:22.481326   53915 retry.go:31] will retry after 639.986264ms: waiting for machine to come up
	I0910 18:43:23.122681   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | domain force-systemd-flag-652506 has defined MAC address 52:54:00:90:1c:89 in network mk-force-systemd-flag-652506
	I0910 18:43:23.123136   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | unable to find current IP address of domain force-systemd-flag-652506 in network mk-force-systemd-flag-652506
	I0910 18:43:23.123163   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:23.123102   53915 retry.go:31] will retry after 830.641931ms: waiting for machine to come up
	I0910 18:43:23.954898   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | domain force-systemd-flag-652506 has defined MAC address 52:54:00:90:1c:89 in network mk-force-systemd-flag-652506
	I0910 18:43:23.955315   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | unable to find current IP address of domain force-systemd-flag-652506 in network mk-force-systemd-flag-652506
	I0910 18:43:23.955336   53892 main.go:141] libmachine: (force-systemd-flag-652506) DBG | I0910 18:43:23.955271   53915 retry.go:31] will retry after 1.045376309s: waiting for machine to come up
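	The retries above are minikube polling libvirt until the new VM picks up a DHCP lease on the private network; a roughly equivalent manual check while waiting (a sketch, not the code path minikube uses) is:

	    virsh --connect qemu:///system domiflist force-systemd-flag-652506
	    virsh --connect qemu:///system net-dhcp-leases mk-force-systemd-flag-652506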
	I0910 18:43:22.914092   53190 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:43:22.930556   53190 node_ready.go:35] waiting up to 6m0s for node "pause-459729" to be "Ready" ...
	I0910 18:43:22.934090   53190 node_ready.go:49] node "pause-459729" has status "Ready":"True"
	I0910 18:43:22.934118   53190 node_ready.go:38] duration metric: took 3.524785ms for node "pause-459729" to be "Ready" ...
	I0910 18:43:22.934130   53190 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:43:22.939271   53190 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t7nl7" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:23.120589   53190 pod_ready.go:93] pod "coredns-6f6b679f8f-t7nl7" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:23.120612   53190 pod_ready.go:82] duration metric: took 181.316344ms for pod "coredns-6f6b679f8f-t7nl7" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:23.120626   53190 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-459729" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:23.520348   53190 pod_ready.go:93] pod "etcd-pause-459729" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:23.520376   53190 pod_ready.go:82] duration metric: took 399.741261ms for pod "etcd-pause-459729" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:23.520390   53190 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-459729" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:23.920901   53190 pod_ready.go:93] pod "kube-apiserver-pause-459729" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:23.920932   53190 pod_ready.go:82] duration metric: took 400.532528ms for pod "kube-apiserver-pause-459729" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:23.920955   53190 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-459729" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:24.320567   53190 pod_ready.go:93] pod "kube-controller-manager-pause-459729" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:24.320592   53190 pod_ready.go:82] duration metric: took 399.627292ms for pod "kube-controller-manager-pause-459729" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:24.320605   53190 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6f9ft" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:24.721454   53190 pod_ready.go:93] pod "kube-proxy-6f9ft" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:24.721480   53190 pod_ready.go:82] duration metric: took 400.866721ms for pod "kube-proxy-6f9ft" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:24.721495   53190 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-459729" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:25.121598   53190 pod_ready.go:93] pod "kube-scheduler-pause-459729" in "kube-system" namespace has status "Ready":"True"
	I0910 18:43:25.121628   53190 pod_ready.go:82] duration metric: took 400.123565ms for pod "kube-scheduler-pause-459729" in "kube-system" namespace to be "Ready" ...
	I0910 18:43:25.121640   53190 pod_ready.go:39] duration metric: took 2.187495281s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
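	The readiness loop above can be reproduced against the same cluster with kubectl, using the context name this run configures ("pause-459729"); these are illustrative equivalents, not the exact calls minikube makes:

	    kubectl --context pause-459729 -n kube-system get pods
	    kubectl --context pause-459729 wait --for=condition=Ready node/pause-459729 --timeout=6m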
	I0910 18:43:25.121659   53190 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:43:25.121723   53190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:43:25.136420   53190 api_server.go:72] duration metric: took 2.384799877s to wait for apiserver process to appear ...
	I0910 18:43:25.136447   53190 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:43:25.136466   53190 api_server.go:253] Checking apiserver healthz at https://192.168.50.99:8443/healthz ...
	I0910 18:43:25.140623   53190 api_server.go:279] https://192.168.50.99:8443/healthz returned 200:
	ok
	I0910 18:43:25.141537   53190 api_server.go:141] control plane version: v1.31.0
	I0910 18:43:25.141559   53190 api_server.go:131] duration metric: took 5.10443ms to wait for apiserver health ...
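	The health probe above hits the apiserver endpoint directly; the same check from a shell against the address in the log (certificate verification skipped for brevity) is:

	    curl -k https://192.168.50.99:8443/healthz
	    # expected body per the log: ok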
	I0910 18:43:25.141566   53190 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:43:25.322817   53190 system_pods.go:59] 6 kube-system pods found
	I0910 18:43:25.322843   53190 system_pods.go:61] "coredns-6f6b679f8f-t7nl7" [45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6] Running
	I0910 18:43:25.322848   53190 system_pods.go:61] "etcd-pause-459729" [575ecaed-a208-4979-9016-0ec5307f8281] Running
	I0910 18:43:25.322853   53190 system_pods.go:61] "kube-apiserver-pause-459729" [5d48506b-3318-4092-8a9a-84310b428505] Running
	I0910 18:43:25.322857   53190 system_pods.go:61] "kube-controller-manager-pause-459729" [4ba3f5a4-6822-4d82-9e25-40d8ef7d7ae1] Running
	I0910 18:43:25.322861   53190 system_pods.go:61] "kube-proxy-6f9ft" [d3e991db-6bfb-4ffe-bc82-d1533f41844b] Running
	I0910 18:43:25.322865   53190 system_pods.go:61] "kube-scheduler-pause-459729" [c07ab313-5e5a-4d03-82d6-85ce883af3e2] Running
	I0910 18:43:25.322872   53190 system_pods.go:74] duration metric: took 181.299266ms to wait for pod list to return data ...
	I0910 18:43:25.322878   53190 default_sa.go:34] waiting for default service account to be created ...
	I0910 18:43:25.520731   53190 default_sa.go:45] found service account: "default"
	I0910 18:43:25.520759   53190 default_sa.go:55] duration metric: took 197.875693ms for default service account to be created ...
	I0910 18:43:25.520769   53190 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 18:43:25.723361   53190 system_pods.go:86] 6 kube-system pods found
	I0910 18:43:25.723389   53190 system_pods.go:89] "coredns-6f6b679f8f-t7nl7" [45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6] Running
	I0910 18:43:25.723394   53190 system_pods.go:89] "etcd-pause-459729" [575ecaed-a208-4979-9016-0ec5307f8281] Running
	I0910 18:43:25.723398   53190 system_pods.go:89] "kube-apiserver-pause-459729" [5d48506b-3318-4092-8a9a-84310b428505] Running
	I0910 18:43:25.723402   53190 system_pods.go:89] "kube-controller-manager-pause-459729" [4ba3f5a4-6822-4d82-9e25-40d8ef7d7ae1] Running
	I0910 18:43:25.723405   53190 system_pods.go:89] "kube-proxy-6f9ft" [d3e991db-6bfb-4ffe-bc82-d1533f41844b] Running
	I0910 18:43:25.723408   53190 system_pods.go:89] "kube-scheduler-pause-459729" [c07ab313-5e5a-4d03-82d6-85ce883af3e2] Running
	I0910 18:43:25.723414   53190 system_pods.go:126] duration metric: took 202.640786ms to wait for k8s-apps to be running ...
	I0910 18:43:25.723421   53190 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 18:43:25.723461   53190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:43:25.738880   53190 system_svc.go:56] duration metric: took 15.451999ms WaitForService to wait for kubelet
	I0910 18:43:25.738906   53190 kubeadm.go:582] duration metric: took 2.987292613s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:43:25.738922   53190 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:43:25.920381   53190 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:43:25.920405   53190 node_conditions.go:123] node cpu capacity is 2
	I0910 18:43:25.920416   53190 node_conditions.go:105] duration metric: took 181.489217ms to run NodePressure ...
	I0910 18:43:25.920427   53190 start.go:241] waiting for startup goroutines ...
	I0910 18:43:25.920434   53190 start.go:246] waiting for cluster config update ...
	I0910 18:43:25.920440   53190 start.go:255] writing updated cluster config ...
	I0910 18:43:25.920718   53190 ssh_runner.go:195] Run: rm -f paused
	I0910 18:43:25.966709   53190 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 18:43:25.968693   53190 out.go:177] * Done! kubectl is now configured to use "pause-459729" cluster and "default" namespace by default
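	After this point the profile is expected to be healthy; quick follow-up checks (suggested commands, not part of the recorded run) would be:

	    minikube status -p pause-459729
	    kubectl config current-context   # should print pause-459729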
	I0910 18:43:23.209468   54227 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0910 18:43:23.262008   54227 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0910 18:43:23.262150   54227 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/NoKubernetes-229565/config.json ...
	I0910 18:43:23.262403   54227 start.go:360] acquireMachinesLock for NoKubernetes-229565: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
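	The 404 above is expected: this profile runs with KubernetesVersion v0.0.0 (no Kubernetes), no preloaded image tarball is published for that version, and minikube continues without a preload. The same check from a shell, using the exact URL and profile path from the log, would be roughly:

	    curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 | head -n 1   # expect a 404 status line, matching the log
	    cat /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/NoKubernetes-229565/config.json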
	
	
	==> CRI-O <==
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.156999954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5ca11be-c107-429e-acda-8a190ef6732c name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.157267985Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b866e4cc28dd3cf4f944df54572ae08b824ef75d314923b0db1c4f5af3425b6,PodSandboxId:4b1b11b89be19963c958f83b5d440c2a33ac5436a4384c96acff350ea37784de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725993785824068433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714edffd445c1714a068f98869ebc1c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ec9808cc4014d60aa3e6d0131ef94422e6d5971e058f56f7d6f9529ed776e3,PodSandboxId:e6fadd9a4845f20f060763c96f73e397290b5ec090c0495c5e654209330cc289,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725993785844450133,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7619d7b95fd844b416116d997fb3ebe3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732d62087adce4a67e4113a8aea919ec8ab4acc910f2ed1ddc42430c05c8d541,PodSandboxId:def0ff0bbe3e740d038c9be2127898ba44a9a07c22e31f47b0f75a7b7e110910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725993785811071051,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871a9a42d1484218d404311dd3859ae5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9ab82089fc93814bf4c345b95ac1779cd3a315bc9e9bcce2123bd5b92c6fd9,PodSandboxId:474db1987e972b55d3045939c8b1890b3385d68f9c1f9a02fa810fed54246b03,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725993773655203101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbef841f86b1ab53aef52f1309c1f595,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400ae9ffe07da788b92b399c1c7de34a9e0c78250ee033231c3584c775006d6c,PodSandboxId:05f0f1bc6c34e0b367f42f35ec596807fc49984d0f3347ec57c2acf000e432f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725993763296068506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t7nl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a44ca5d97d3aea67a6faa8d29de9af945346902b504f41541efc91ce9c20cc02,PodSandboxId:bcb0074ddaf363c84305cd17d03d6bb776f120d08a48bc59959e82d49aa5aff1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725993762669165249,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6f9ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e991db-6bfb-4ffe-bc82-d1533f41844b,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616b2c77b30d4228cd00b109a92ad85ff572cc4e317aad2d290f7c3093bef10a,PodSandboxId:e6fadd9a4845f20f060763c96f73e397290b5ec090c0495c5e654209330cc289,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725993762511439925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7619d7b95fd844b416116d997fb3ebe3,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887b204becab401e5b3be83d69dacee629b065ccd9ab9732c3fcf630fab0e154,PodSandboxId:474db1987e972b55d3045939c8b1890b3385d68f9c1f9a02fa810fed54246b03,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725993762473505516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbef841f86b1ab53aef52f1309c1f595,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7296148ec9e14e597ba27cb7da7aaf117de63f5939640facb7f28fb7bc28223,PodSandboxId:4b1b11b89be19963c958f83b5d440c2a33ac5436a4384c96acff350ea37784de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725993762432097415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714edffd445c1714a068f98869ebc1c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8b4bdfe79b76c4e19d5fb554dbaa75db3ea828999d2c94e0485f001f018b6fe,PodSandboxId:def0ff0bbe3e740d038c9be2127898ba44a9a07c22e31f47b0f75a7b7e110910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725993762399919336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871a9a42d1484218d404311dd3859ae5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c554d2f7dee4e29def9c7a65b6a5ff57f819fb17da98d03f21775e5cacde12,PodSandboxId:b8e1ea111f3c6c3342d44022a0e4bb47716197ca7abdd3ec87898c79c0d65e35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725993715849204907,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6f9ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e991db-6bfb-4ffe-bc82-d1533f41844b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f2773d9050f95c3f38aafc5ac2cd12063952f294a075fc1d401576aaaceac03,PodSandboxId:42082ba41e76d8021fb2a7fc27b8c0614c18fccad25c7363a522647e80db058d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725993715778027536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t7nl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a5ca11be-c107-429e-acda-8a190ef6732c name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.211988743Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf09a261-a2be-49bd-883e-30771c17ccc7 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.212114578Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf09a261-a2be-49bd-883e-30771c17ccc7 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.213696865Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ea8ff61-8848-47e6-b3ab-145cd3eb486b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.214270615Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725993809214238989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ea8ff61-8848-47e6-b3ab-145cd3eb486b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.215028627Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79e8d56d-4257-49e7-8fbb-fde000b72dae name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.215130548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79e8d56d-4257-49e7-8fbb-fde000b72dae name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.215488958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b866e4cc28dd3cf4f944df54572ae08b824ef75d314923b0db1c4f5af3425b6,PodSandboxId:4b1b11b89be19963c958f83b5d440c2a33ac5436a4384c96acff350ea37784de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725993785824068433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714edffd445c1714a068f98869ebc1c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ec9808cc4014d60aa3e6d0131ef94422e6d5971e058f56f7d6f9529ed776e3,PodSandboxId:e6fadd9a4845f20f060763c96f73e397290b5ec090c0495c5e654209330cc289,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725993785844450133,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7619d7b95fd844b416116d997fb3ebe3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732d62087adce4a67e4113a8aea919ec8ab4acc910f2ed1ddc42430c05c8d541,PodSandboxId:def0ff0bbe3e740d038c9be2127898ba44a9a07c22e31f47b0f75a7b7e110910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725993785811071051,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871a9a42d1484218d404311dd3859ae5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9ab82089fc93814bf4c345b95ac1779cd3a315bc9e9bcce2123bd5b92c6fd9,PodSandboxId:474db1987e972b55d3045939c8b1890b3385d68f9c1f9a02fa810fed54246b03,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725993773655203101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbef841f86b1ab53aef52f1309c1f595,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400ae9ffe07da788b92b399c1c7de34a9e0c78250ee033231c3584c775006d6c,PodSandboxId:05f0f1bc6c34e0b367f42f35ec596807fc49984d0f3347ec57c2acf000e432f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725993763296068506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t7nl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a44ca5d97d3aea67a6faa8d29de9af945346902b504f41541efc91ce9c20cc02,PodSandboxId:bcb0074ddaf363c84305cd17d03d6bb776f120d08a48bc59959e82d49aa5aff1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725993762669165249,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6f9ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e991db-6bfb-4ffe-bc82-d1533f41844b,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616b2c77b30d4228cd00b109a92ad85ff572cc4e317aad2d290f7c3093bef10a,PodSandboxId:e6fadd9a4845f20f060763c96f73e397290b5ec090c0495c5e654209330cc289,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725993762511439925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7619d7b95fd844b416116d997fb3ebe3,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887b204becab401e5b3be83d69dacee629b065ccd9ab9732c3fcf630fab0e154,PodSandboxId:474db1987e972b55d3045939c8b1890b3385d68f9c1f9a02fa810fed54246b03,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725993762473505516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbef841f86b1ab53aef52f1309c1f595,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7296148ec9e14e597ba27cb7da7aaf117de63f5939640facb7f28fb7bc28223,PodSandboxId:4b1b11b89be19963c958f83b5d440c2a33ac5436a4384c96acff350ea37784de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725993762432097415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714edffd445c1714a068f98869ebc1c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8b4bdfe79b76c4e19d5fb554dbaa75db3ea828999d2c94e0485f001f018b6fe,PodSandboxId:def0ff0bbe3e740d038c9be2127898ba44a9a07c22e31f47b0f75a7b7e110910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725993762399919336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871a9a42d1484218d404311dd3859ae5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c554d2f7dee4e29def9c7a65b6a5ff57f819fb17da98d03f21775e5cacde12,PodSandboxId:b8e1ea111f3c6c3342d44022a0e4bb47716197ca7abdd3ec87898c79c0d65e35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725993715849204907,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6f9ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e991db-6bfb-4ffe-bc82-d1533f41844b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f2773d9050f95c3f38aafc5ac2cd12063952f294a075fc1d401576aaaceac03,PodSandboxId:42082ba41e76d8021fb2a7fc27b8c0614c18fccad25c7363a522647e80db058d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725993715778027536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t7nl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=79e8d56d-4257-49e7-8fbb-fde000b72dae name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.226008130Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=87da11b8-6950-478d-b52f-ee3b308c948e name=/runtime.v1.RuntimeService/Version
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.226121647Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87da11b8-6950-478d-b52f-ee3b308c948e name=/runtime.v1.RuntimeService/Version
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.264669012Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=08aff942-f3ea-4d65-ab27-b195d224ce18 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.264856221Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08aff942-f3ea-4d65-ab27-b195d224ce18 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.266357352Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c1b80d9-b46c-44fc-930a-66ec0f7e8737 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.266935186Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725993809266908645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c1b80d9-b46c-44fc-930a-66ec0f7e8737 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.267659783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e74de827-06cf-40ae-8999-d5e41aeb7376 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.267755109Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e74de827-06cf-40ae-8999-d5e41aeb7376 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.268087901Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b866e4cc28dd3cf4f944df54572ae08b824ef75d314923b0db1c4f5af3425b6,PodSandboxId:4b1b11b89be19963c958f83b5d440c2a33ac5436a4384c96acff350ea37784de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725993785824068433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714edffd445c1714a068f98869ebc1c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ec9808cc4014d60aa3e6d0131ef94422e6d5971e058f56f7d6f9529ed776e3,PodSandboxId:e6fadd9a4845f20f060763c96f73e397290b5ec090c0495c5e654209330cc289,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725993785844450133,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7619d7b95fd844b416116d997fb3ebe3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732d62087adce4a67e4113a8aea919ec8ab4acc910f2ed1ddc42430c05c8d541,PodSandboxId:def0ff0bbe3e740d038c9be2127898ba44a9a07c22e31f47b0f75a7b7e110910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725993785811071051,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871a9a42d1484218d404311dd3859ae5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9ab82089fc93814bf4c345b95ac1779cd3a315bc9e9bcce2123bd5b92c6fd9,PodSandboxId:474db1987e972b55d3045939c8b1890b3385d68f9c1f9a02fa810fed54246b03,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725993773655203101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbef841f86b1ab53aef52f1309c1f595,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400ae9ffe07da788b92b399c1c7de34a9e0c78250ee033231c3584c775006d6c,PodSandboxId:05f0f1bc6c34e0b367f42f35ec596807fc49984d0f3347ec57c2acf000e432f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725993763296068506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t7nl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a44ca5d97d3aea67a6faa8d29de9af945346902b504f41541efc91ce9c20cc02,PodSandboxId:bcb0074ddaf363c84305cd17d03d6bb776f120d08a48bc59959e82d49aa5aff1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725993762669165249,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6f9ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e991db-6bfb-4ffe-bc82-d1533f41844b,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616b2c77b30d4228cd00b109a92ad85ff572cc4e317aad2d290f7c3093bef10a,PodSandboxId:e6fadd9a4845f20f060763c96f73e397290b5ec090c0495c5e654209330cc289,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725993762511439925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7619d7b95fd844b416116d997fb3ebe3,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887b204becab401e5b3be83d69dacee629b065ccd9ab9732c3fcf630fab0e154,PodSandboxId:474db1987e972b55d3045939c8b1890b3385d68f9c1f9a02fa810fed54246b03,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725993762473505516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbef841f86b1ab53aef52f1309c1f595,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7296148ec9e14e597ba27cb7da7aaf117de63f5939640facb7f28fb7bc28223,PodSandboxId:4b1b11b89be19963c958f83b5d440c2a33ac5436a4384c96acff350ea37784de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725993762432097415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714edffd445c1714a068f98869ebc1c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8b4bdfe79b76c4e19d5fb554dbaa75db3ea828999d2c94e0485f001f018b6fe,PodSandboxId:def0ff0bbe3e740d038c9be2127898ba44a9a07c22e31f47b0f75a7b7e110910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725993762399919336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871a9a42d1484218d404311dd3859ae5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c554d2f7dee4e29def9c7a65b6a5ff57f819fb17da98d03f21775e5cacde12,PodSandboxId:b8e1ea111f3c6c3342d44022a0e4bb47716197ca7abdd3ec87898c79c0d65e35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725993715849204907,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6f9ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e991db-6bfb-4ffe-bc82-d1533f41844b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f2773d9050f95c3f38aafc5ac2cd12063952f294a075fc1d401576aaaceac03,PodSandboxId:42082ba41e76d8021fb2a7fc27b8c0614c18fccad25c7363a522647e80db058d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725993715778027536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t7nl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e74de827-06cf-40ae-8999-d5e41aeb7376 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.317998735Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=89eb9f7f-69bf-4cb5-896f-980826563ce7 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.318093479Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=89eb9f7f-69bf-4cb5-896f-980826563ce7 name=/runtime.v1.RuntimeService/Version
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.319229789Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f6ea6a4-3e48-445b-b1b8-80c757f5f598 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.319688958Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725993809319656981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f6ea6a4-3e48-445b-b1b8-80c757f5f598 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.320270334Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=940caf46-fbf1-49a4-bfcc-67f90d8ea8f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.320337956Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=940caf46-fbf1-49a4-bfcc-67f90d8ea8f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 18:43:29 pause-459729 crio[2299]: time="2024-09-10 18:43:29.320650479Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b866e4cc28dd3cf4f944df54572ae08b824ef75d314923b0db1c4f5af3425b6,PodSandboxId:4b1b11b89be19963c958f83b5d440c2a33ac5436a4384c96acff350ea37784de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725993785824068433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714edffd445c1714a068f98869ebc1c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ec9808cc4014d60aa3e6d0131ef94422e6d5971e058f56f7d6f9529ed776e3,PodSandboxId:e6fadd9a4845f20f060763c96f73e397290b5ec090c0495c5e654209330cc289,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725993785844450133,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7619d7b95fd844b416116d997fb3ebe3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732d62087adce4a67e4113a8aea919ec8ab4acc910f2ed1ddc42430c05c8d541,PodSandboxId:def0ff0bbe3e740d038c9be2127898ba44a9a07c22e31f47b0f75a7b7e110910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725993785811071051,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871a9a42d1484218d404311dd3859ae5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9ab82089fc93814bf4c345b95ac1779cd3a315bc9e9bcce2123bd5b92c6fd9,PodSandboxId:474db1987e972b55d3045939c8b1890b3385d68f9c1f9a02fa810fed54246b03,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725993773655203101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbef841f86b1ab53aef52f1309c1f595,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400ae9ffe07da788b92b399c1c7de34a9e0c78250ee033231c3584c775006d6c,PodSandboxId:05f0f1bc6c34e0b367f42f35ec596807fc49984d0f3347ec57c2acf000e432f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725993763296068506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t7nl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a44ca5d97d3aea67a6faa8d29de9af945346902b504f41541efc91ce9c20cc02,PodSandboxId:bcb0074ddaf363c84305cd17d03d6bb776f120d08a48bc59959e82d49aa5aff1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725993762669165249,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6f9ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e991db-6bfb-4ffe-bc82-d1533f41844b,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616b2c77b30d4228cd00b109a92ad85ff572cc4e317aad2d290f7c3093bef10a,PodSandboxId:e6fadd9a4845f20f060763c96f73e397290b5ec090c0495c5e654209330cc289,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725993762511439925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7619d7b95fd844b416116d997fb3ebe3,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887b204becab401e5b3be83d69dacee629b065ccd9ab9732c3fcf630fab0e154,PodSandboxId:474db1987e972b55d3045939c8b1890b3385d68f9c1f9a02fa810fed54246b03,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725993762473505516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbef841f86b1ab53aef52f1309c1f595,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7296148ec9e14e597ba27cb7da7aaf117de63f5939640facb7f28fb7bc28223,PodSandboxId:4b1b11b89be19963c958f83b5d440c2a33ac5436a4384c96acff350ea37784de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725993762432097415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714edffd445c1714a068f98869ebc1c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8b4bdfe79b76c4e19d5fb554dbaa75db3ea828999d2c94e0485f001f018b6fe,PodSandboxId:def0ff0bbe3e740d038c9be2127898ba44a9a07c22e31f47b0f75a7b7e110910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725993762399919336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-459729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871a9a42d1484218d404311dd3859ae5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c554d2f7dee4e29def9c7a65b6a5ff57f819fb17da98d03f21775e5cacde12,PodSandboxId:b8e1ea111f3c6c3342d44022a0e4bb47716197ca7abdd3ec87898c79c0d65e35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725993715849204907,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6f9ft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e991db-6bfb-4ffe-bc82-d1533f41844b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f2773d9050f95c3f38aafc5ac2cd12063952f294a075fc1d401576aaaceac03,PodSandboxId:42082ba41e76d8021fb2a7fc27b8c0614c18fccad25c7363a522647e80db058d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725993715778027536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t7nl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45253c4d-0c1a-4ef4-9777-3b8a1ad4c4b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=940caf46-fbf1-49a4-bfcc-67f90d8ea8f0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e0ec9808cc401       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   23 seconds ago       Running             kube-controller-manager   2                   e6fadd9a4845f       kube-controller-manager-pause-459729
	6b866e4cc28dd       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   23 seconds ago       Running             kube-scheduler            2                   4b1b11b89be19       kube-scheduler-pause-459729
	732d62087adce       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   23 seconds ago       Running             kube-apiserver            2                   def0ff0bbe3e7       kube-apiserver-pause-459729
	0c9ab82089fc9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   35 seconds ago       Running             etcd                      2                   474db1987e972       etcd-pause-459729
	400ae9ffe07da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   46 seconds ago       Running             coredns                   1                   05f0f1bc6c34e       coredns-6f6b679f8f-t7nl7
	a44ca5d97d3ae       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   46 seconds ago       Running             kube-proxy                1                   bcb0074ddaf36       kube-proxy-6f9ft
	616b2c77b30d4       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   46 seconds ago       Exited              kube-controller-manager   1                   e6fadd9a4845f       kube-controller-manager-pause-459729
	887b204becab4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   46 seconds ago       Exited              etcd                      1                   474db1987e972       etcd-pause-459729
	e7296148ec9e1       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   46 seconds ago       Exited              kube-scheduler            1                   4b1b11b89be19       kube-scheduler-pause-459729
	d8b4bdfe79b76       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   47 seconds ago       Exited              kube-apiserver            1                   def0ff0bbe3e7       kube-apiserver-pause-459729
	28c554d2f7dee       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   About a minute ago   Exited              kube-proxy                0                   b8e1ea111f3c6       kube-proxy-6f9ft
	9f2773d9050f9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   42082ba41e76d       coredns-6f6b679f8f-t7nl7
	
	
	==> coredns [400ae9ffe07da788b92b399c1c7de34a9e0c78250ee033231c3584c775006d6c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57810 - 35147 "HINFO IN 8656652298870847717.8343541974218492693. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010495775s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1735866020]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:42:43.504) (total time: 10001ms):
	Trace[1735866020]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:42:53.505)
	Trace[1735866020]: [10.001375551s] [10.001375551s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1126934098]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:42:43.503) (total time: 10002ms):
	Trace[1126934098]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (18:42:53.505)
	Trace[1126934098]: [10.002353503s] [10.002353503s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[373000815]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:42:43.504) (total time: 10001ms):
	Trace[373000815]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:42:53.505)
	Trace[373000815]: [10.001451296s] [10.001451296s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [9f2773d9050f95c3f38aafc5ac2cd12063952f294a075fc1d401576aaaceac03] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/kubernetes: Trace[621585235]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:41:56.111) (total time: 28748ms):
	Trace[621585235]: ---"Objects listed" error:<nil> 28748ms (18:42:24.859)
	Trace[621585235]: [28.748792278s] [28.748792278s] END
	[INFO] plugin/kubernetes: Trace[203482088]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:41:56.109) (total time: 28750ms):
	Trace[203482088]: ---"Objects listed" error:<nil> 28750ms (18:42:24.860)
	Trace[203482088]: [28.750366005s] [28.750366005s] END
	[INFO] plugin/kubernetes: Trace[1264046127]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Sep-2024 18:41:56.111) (total time: 28749ms):
	Trace[1264046127]: ---"Objects listed" error:<nil> 28749ms (18:42:24.860)
	Trace[1264046127]: [28.749417038s] [28.749417038s] END
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:37650 - 178 "HINFO IN 7572563276532883976.1939927841076177656. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009368328s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-459729
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-459729
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=pause-459729
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T18_41_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:41:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-459729
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 18:43:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 18:43:08 +0000   Tue, 10 Sep 2024 18:41:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 18:43:08 +0000   Tue, 10 Sep 2024 18:41:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 18:43:08 +0000   Tue, 10 Sep 2024 18:41:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 18:43:08 +0000   Tue, 10 Sep 2024 18:41:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.99
	  Hostname:    pause-459729
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 e1eff2ee73484525963ca3613f35c68e
	  System UUID:                e1eff2ee-7348-4525-963c-a3613f35c68e
	  Boot ID:                    8607bc42-0597-406d-af20-406002aaa270
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-t7nl7                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     95s
	  kube-system                 etcd-pause-459729                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         101s
	  kube-system                 kube-apiserver-pause-459729             250m (12%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-pause-459729    200m (10%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-6f9ft                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-scheduler-pause-459729             100m (5%)     0 (0%)      0 (0%)           0 (0%)         100s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 93s                kube-proxy       
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeAllocatableEnforced  100s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  100s               kubelet          Node pause-459729 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s               kubelet          Node pause-459729 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s               kubelet          Node pause-459729 status is now: NodeHasSufficientPID
	  Normal  Starting                 100s               kubelet          Starting kubelet.
	  Normal  NodeReady                99s                kubelet          Node pause-459729 status is now: NodeReady
	  Normal  RegisteredNode           96s                node-controller  Node pause-459729 event: Registered Node pause-459729 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-459729 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-459729 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-459729 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                node-controller  Node pause-459729 event: Registered Node pause-459729 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.261497] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.086710] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074657] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.238765] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.168642] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.379200] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +4.229102] systemd-fstab-generator[742]: Ignoring "noauto" option for root device
	[  +0.069476] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.266396] systemd-fstab-generator[873]: Ignoring "noauto" option for root device
	[  +6.563568] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.091635] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.343663] systemd-fstab-generator[1339]: Ignoring "noauto" option for root device
	[  +0.150839] kauditd_printk_skb: 18 callbacks suppressed
	[Sep10 18:42] kauditd_printk_skb: 97 callbacks suppressed
	[ +33.816879] systemd-fstab-generator[2223]: Ignoring "noauto" option for root device
	[  +0.151690] systemd-fstab-generator[2235]: Ignoring "noauto" option for root device
	[  +0.183951] systemd-fstab-generator[2249]: Ignoring "noauto" option for root device
	[  +0.149466] systemd-fstab-generator[2261]: Ignoring "noauto" option for root device
	[  +0.304783] systemd-fstab-generator[2289]: Ignoring "noauto" option for root device
	[  +0.697376] systemd-fstab-generator[2411]: Ignoring "noauto" option for root device
	[  +8.323407] kauditd_printk_skb: 197 callbacks suppressed
	[Sep10 18:43] systemd-fstab-generator[3303]: Ignoring "noauto" option for root device
	[  +8.286267] kauditd_printk_skb: 32 callbacks suppressed
	[  +9.476277] systemd-fstab-generator[3540]: Ignoring "noauto" option for root device
	
	
	==> etcd [0c9ab82089fc93814bf4c345b95ac1779cd3a315bc9e9bcce2123bd5b92c6fd9] <==
	{"level":"info","ts":"2024-09-10T18:42:53.798952Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.99:2380"}
	{"level":"info","ts":"2024-09-10T18:42:53.797823Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-10T18:42:53.799027Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-10T18:42:53.799055Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-10T18:42:53.798154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 switched to configuration voters=(3860576149566711699)"}
	{"level":"info","ts":"2024-09-10T18:42:53.797717Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-10T18:42:53.799195Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"be49f23f45b186b5","local-member-id":"3593858dc744e793","added-peer-id":"3593858dc744e793","added-peer-peer-urls":["https://192.168.50.99:2380"]}
	{"level":"info","ts":"2024-09-10T18:42:53.799391Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"be49f23f45b186b5","local-member-id":"3593858dc744e793","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:42:53.799443Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:42:55.385411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-10T18:42:55.385506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-10T18:42:55.385623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 received MsgPreVoteResp from 3593858dc744e793 at term 2"}
	{"level":"info","ts":"2024-09-10T18:42:55.385653Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 became candidate at term 3"}
	{"level":"info","ts":"2024-09-10T18:42:55.385661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 received MsgVoteResp from 3593858dc744e793 at term 3"}
	{"level":"info","ts":"2024-09-10T18:42:55.385675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 became leader at term 3"}
	{"level":"info","ts":"2024-09-10T18:42:55.385695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3593858dc744e793 elected leader 3593858dc744e793 at term 3"}
	{"level":"info","ts":"2024-09-10T18:42:55.391115Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3593858dc744e793","local-member-attributes":"{Name:pause-459729 ClientURLs:[https://192.168.50.99:2379]}","request-path":"/0/members/3593858dc744e793/attributes","cluster-id":"be49f23f45b186b5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T18:42:55.391133Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:42:55.391433Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T18:42:55.391456Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T18:42:55.391482Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T18:42:55.392670Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:42:55.392744Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:42:55.393849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T18:42:55.393849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.99:2379"}
	
	
	==> etcd [887b204becab401e5b3be83d69dacee629b065ccd9ab9732c3fcf630fab0e154] <==
	{"level":"info","ts":"2024-09-10T18:42:43.042116Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-10T18:42:43.089797Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"be49f23f45b186b5","local-member-id":"3593858dc744e793","commit-index":419}
	{"level":"info","ts":"2024-09-10T18:42:43.090159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-10T18:42:43.095953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 became follower at term 2"}
	{"level":"info","ts":"2024-09-10T18:42:43.096637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 3593858dc744e793 [peers: [], term: 2, commit: 419, applied: 0, lastindex: 419, lastterm: 2]"}
	{"level":"warn","ts":"2024-09-10T18:42:43.100858Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-10T18:42:43.106624Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":400}
	{"level":"info","ts":"2024-09-10T18:42:43.111962Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-10T18:42:43.127467Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"3593858dc744e793","timeout":"7s"}
	{"level":"info","ts":"2024-09-10T18:42:43.130121Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"3593858dc744e793"}
	{"level":"info","ts":"2024-09-10T18:42:43.130201Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"3593858dc744e793","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-10T18:42:43.130751Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T18:42:43.130932Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-10T18:42:43.131068Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-10T18:42:43.134743Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-10T18:42:43.134782Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-10T18:42:43.131304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3593858dc744e793 switched to configuration voters=(3860576149566711699)"}
	{"level":"info","ts":"2024-09-10T18:42:43.134925Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"be49f23f45b186b5","local-member-id":"3593858dc744e793","added-peer-id":"3593858dc744e793","added-peer-peer-urls":["https://192.168.50.99:2380"]}
	{"level":"info","ts":"2024-09-10T18:42:43.135039Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"be49f23f45b186b5","local-member-id":"3593858dc744e793","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:42:43.135088Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T18:42:43.137474Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-10T18:42:43.137708Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.99:2380"}
	{"level":"info","ts":"2024-09-10T18:42:43.137719Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.99:2380"}
	{"level":"info","ts":"2024-09-10T18:42:43.143249Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"3593858dc744e793","initial-advertise-peer-urls":["https://192.168.50.99:2380"],"listen-peer-urls":["https://192.168.50.99:2380"],"advertise-client-urls":["https://192.168.50.99:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.99:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-10T18:42:43.143312Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> kernel <==
	 18:43:29 up 2 min,  0 users,  load average: 0.52, 0.21, 0.08
	Linux pause-459729 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [732d62087adce4a67e4113a8aea919ec8ab4acc910f2ed1ddc42430c05c8d541] <==
	I0910 18:43:08.767848       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0910 18:43:08.768016       1 aggregator.go:171] initial CRD sync complete...
	I0910 18:43:08.768055       1 autoregister_controller.go:144] Starting autoregister controller
	I0910 18:43:08.768080       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0910 18:43:08.768102       1 cache.go:39] Caches are synced for autoregister controller
	I0910 18:43:08.776045       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0910 18:43:08.776112       1 policy_source.go:224] refreshing policies
	I0910 18:43:08.794855       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0910 18:43:08.797290       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0910 18:43:08.797974       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0910 18:43:08.798001       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0910 18:43:08.799038       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0910 18:43:08.799421       1 shared_informer.go:320] Caches are synced for configmaps
	I0910 18:43:08.799506       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0910 18:43:08.799842       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0910 18:43:08.807308       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0910 18:43:09.602721       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0910 18:43:09.813061       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.99]
	I0910 18:43:09.814147       1 controller.go:615] quota admission added evaluator for: endpoints
	I0910 18:43:09.819365       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0910 18:43:10.019207       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0910 18:43:10.030961       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0910 18:43:10.074870       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0910 18:43:10.110098       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0910 18:43:10.117357       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [d8b4bdfe79b76c4e19d5fb554dbaa75db3ea828999d2c94e0485f001f018b6fe] <==
	I0910 18:43:01.717171       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0910 18:43:01.717214       1 controller.go:132] Ending legacy_token_tracking_controller
	I0910 18:43:01.717220       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I0910 18:43:01.717237       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0910 18:43:01.717248       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0910 18:43:01.717802       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0910 18:43:01.717959       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0910 18:43:01.718435       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0910 18:43:01.718477       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0910 18:43:01.718506       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0910 18:43:01.718636       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0910 18:43:01.720441       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0910 18:43:01.720685       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0910 18:43:01.723646       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0910 18:43:01.725664       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0910 18:43:01.749619       1 controller.go:157] Shutting down quota evaluator
	I0910 18:43:01.750067       1 controller.go:176] quota evaluator worker shutdown
	I0910 18:43:01.750114       1 controller.go:176] quota evaluator worker shutdown
	I0910 18:43:01.750125       1 controller.go:176] quota evaluator worker shutdown
	I0910 18:43:01.750133       1 controller.go:176] quota evaluator worker shutdown
	I0910 18:43:01.750140       1 controller.go:176] quota evaluator worker shutdown
	E0910 18:43:02.379960       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0910 18:43:02.380297       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W0910 18:43:03.380218       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0910 18:43:03.380644       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-controller-manager [616b2c77b30d4228cd00b109a92ad85ff572cc4e317aad2d290f7c3093bef10a] <==
	
	
	==> kube-controller-manager [e0ec9808cc4014d60aa3e6d0131ef94422e6d5971e058f56f7d6f9529ed776e3] <==
	I0910 18:43:12.047153       1 shared_informer.go:320] Caches are synced for disruption
	I0910 18:43:12.048322       1 shared_informer.go:320] Caches are synced for cronjob
	I0910 18:43:12.050618       1 shared_informer.go:320] Caches are synced for TTL
	I0910 18:43:12.063917       1 shared_informer.go:320] Caches are synced for node
	I0910 18:43:12.064002       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0910 18:43:12.064024       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0910 18:43:12.064046       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0910 18:43:12.064053       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0910 18:43:12.064124       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-459729"
	I0910 18:43:12.069578       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0910 18:43:12.086034       1 shared_informer.go:320] Caches are synced for endpoint
	I0910 18:43:12.136040       1 shared_informer.go:320] Caches are synced for attach detach
	I0910 18:43:12.202086       1 shared_informer.go:320] Caches are synced for taint
	I0910 18:43:12.202308       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0910 18:43:12.202456       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-459729"
	I0910 18:43:12.202610       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0910 18:43:12.241351       1 shared_informer.go:320] Caches are synced for resource quota
	I0910 18:43:12.286501       1 shared_informer.go:320] Caches are synced for resource quota
	I0910 18:43:12.670919       1 shared_informer.go:320] Caches are synced for garbage collector
	I0910 18:43:12.735402       1 shared_informer.go:320] Caches are synced for garbage collector
	I0910 18:43:12.735446       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0910 18:43:15.772985       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="37.731468ms"
	I0910 18:43:15.773106       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="61.45µs"
	I0910 18:43:15.794857       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="20.614211ms"
	I0910 18:43:15.795087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="103.774µs"
	
	
	==> kube-proxy [28c554d2f7dee4e29def9c7a65b6a5ff57f819fb17da98d03f21775e5cacde12] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 18:41:56.165158       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 18:41:56.175071       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.99"]
	E0910 18:41:56.175160       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 18:41:56.219298       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 18:41:56.219347       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 18:41:56.219387       1 server_linux.go:169] "Using iptables Proxier"
	I0910 18:41:56.223062       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 18:41:56.223319       1 server.go:483] "Version info" version="v1.31.0"
	I0910 18:41:56.223330       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:41:56.226208       1 config.go:197] "Starting service config controller"
	I0910 18:41:56.226254       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 18:41:56.227930       1 config.go:104] "Starting endpoint slice config controller"
	I0910 18:41:56.227962       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 18:41:56.228497       1 config.go:326] "Starting node config controller"
	I0910 18:41:56.232786       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 18:41:56.326334       1 shared_informer.go:320] Caches are synced for service config
	I0910 18:41:56.328573       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 18:41:56.335109       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [a44ca5d97d3aea67a6faa8d29de9af945346902b504f41541efc91ce9c20cc02] <==
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 18:42:53.734823       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-459729\": net/http: TLS handshake timeout"
	I0910 18:43:01.602973       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.99"]
	E0910 18:43:01.603310       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 18:43:01.646869       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 18:43:01.646943       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 18:43:01.646977       1 server_linux.go:169] "Using iptables Proxier"
	I0910 18:43:01.650362       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 18:43:01.650809       1 server.go:483] "Version info" version="v1.31.0"
	I0910 18:43:01.650851       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:43:01.653685       1 config.go:197] "Starting service config controller"
	I0910 18:43:01.653749       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 18:43:01.653788       1 config.go:104] "Starting endpoint slice config controller"
	I0910 18:43:01.653814       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 18:43:01.655125       1 config.go:326] "Starting node config controller"
	I0910 18:43:01.655226       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 18:43:01.754617       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 18:43:01.754689       1 shared_informer.go:320] Caches are synced for service config
	I0910 18:43:01.756046       1 shared_informer.go:320] Caches are synced for node config
	E0910 18:43:08.735821       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)" logger="UnhandledError"
	E0910 18:43:08.735993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0910 18:43:08.736102       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	
	
	==> kube-scheduler [6b866e4cc28dd3cf4f944df54572ae08b824ef75d314923b0db1c4f5af3425b6] <==
	I0910 18:43:07.411009       1 serving.go:386] Generated self-signed cert in-memory
	W0910 18:43:08.689903       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0910 18:43:08.690126       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0910 18:43:08.690165       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0910 18:43:08.690314       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0910 18:43:08.738994       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0910 18:43:08.744605       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:43:08.753484       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0910 18:43:08.754732       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0910 18:43:08.755820       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 18:43:08.754763       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0910 18:43:08.857070       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e7296148ec9e14e597ba27cb7da7aaf117de63f5939640facb7f28fb7bc28223] <==
	I0910 18:42:43.837228       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Sep 10 18:43:05 pause-459729 kubelet[3310]: I0910 18:43:05.533969    3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7619d7b95fd844b416116d997fb3ebe3-flexvolume-dir\") pod \"kube-controller-manager-pause-459729\" (UID: \"7619d7b95fd844b416116d997fb3ebe3\") " pod="kube-system/kube-controller-manager-pause-459729"
	Sep 10 18:43:05 pause-459729 kubelet[3310]: I0910 18:43:05.533989    3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7619d7b95fd844b416116d997fb3ebe3-kubeconfig\") pod \"kube-controller-manager-pause-459729\" (UID: \"7619d7b95fd844b416116d997fb3ebe3\") " pod="kube-system/kube-controller-manager-pause-459729"
	Sep 10 18:43:05 pause-459729 kubelet[3310]: I0910 18:43:05.534017    3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7619d7b95fd844b416116d997fb3ebe3-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-459729\" (UID: \"7619d7b95fd844b416116d997fb3ebe3\") " pod="kube-system/kube-controller-manager-pause-459729"
	Sep 10 18:43:05 pause-459729 kubelet[3310]: E0910 18:43:05.535210    3310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-459729?timeout=10s\": dial tcp 192.168.50.99:8443: connect: connection refused" interval="400ms"
	Sep 10 18:43:05 pause-459729 kubelet[3310]: I0910 18:43:05.700003    3310 kubelet_node_status.go:72] "Attempting to register node" node="pause-459729"
	Sep 10 18:43:05 pause-459729 kubelet[3310]: E0910 18:43:05.701156    3310 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.99:8443: connect: connection refused" node="pause-459729"
	Sep 10 18:43:05 pause-459729 kubelet[3310]: I0910 18:43:05.786100    3310 scope.go:117] "RemoveContainer" containerID="d8b4bdfe79b76c4e19d5fb554dbaa75db3ea828999d2c94e0485f001f018b6fe"
	Sep 10 18:43:05 pause-459729 kubelet[3310]: I0910 18:43:05.787437    3310 scope.go:117] "RemoveContainer" containerID="616b2c77b30d4228cd00b109a92ad85ff572cc4e317aad2d290f7c3093bef10a"
	Sep 10 18:43:05 pause-459729 kubelet[3310]: I0910 18:43:05.788743    3310 scope.go:117] "RemoveContainer" containerID="e7296148ec9e14e597ba27cb7da7aaf117de63f5939640facb7f28fb7bc28223"
	Sep 10 18:43:05 pause-459729 kubelet[3310]: E0910 18:43:05.937496    3310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-459729?timeout=10s\": dial tcp 192.168.50.99:8443: connect: connection refused" interval="800ms"
	Sep 10 18:43:06 pause-459729 kubelet[3310]: I0910 18:43:06.102670    3310 kubelet_node_status.go:72] "Attempting to register node" node="pause-459729"
	Sep 10 18:43:06 pause-459729 kubelet[3310]: E0910 18:43:06.103787    3310 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.99:8443: connect: connection refused" node="pause-459729"
	Sep 10 18:43:06 pause-459729 kubelet[3310]: I0910 18:43:06.905603    3310 kubelet_node_status.go:72] "Attempting to register node" node="pause-459729"
	Sep 10 18:43:08 pause-459729 kubelet[3310]: I0910 18:43:08.878929    3310 kubelet_node_status.go:111] "Node was previously registered" node="pause-459729"
	Sep 10 18:43:08 pause-459729 kubelet[3310]: I0910 18:43:08.879027    3310 kubelet_node_status.go:75] "Successfully registered node" node="pause-459729"
	Sep 10 18:43:08 pause-459729 kubelet[3310]: I0910 18:43:08.879048    3310 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 10 18:43:08 pause-459729 kubelet[3310]: I0910 18:43:08.880008    3310 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 10 18:43:09 pause-459729 kubelet[3310]: I0910 18:43:09.308406    3310 apiserver.go:52] "Watching apiserver"
	Sep 10 18:43:09 pause-459729 kubelet[3310]: I0910 18:43:09.329243    3310 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 10 18:43:09 pause-459729 kubelet[3310]: I0910 18:43:09.428657    3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3e991db-6bfb-4ffe-bc82-d1533f41844b-lib-modules\") pod \"kube-proxy-6f9ft\" (UID: \"d3e991db-6bfb-4ffe-bc82-d1533f41844b\") " pod="kube-system/kube-proxy-6f9ft"
	Sep 10 18:43:09 pause-459729 kubelet[3310]: I0910 18:43:09.428729    3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3e991db-6bfb-4ffe-bc82-d1533f41844b-xtables-lock\") pod \"kube-proxy-6f9ft\" (UID: \"d3e991db-6bfb-4ffe-bc82-d1533f41844b\") " pod="kube-system/kube-proxy-6f9ft"
	Sep 10 18:43:15 pause-459729 kubelet[3310]: E0910 18:43:15.402337    3310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725993795401990823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:43:15 pause-459729 kubelet[3310]: E0910 18:43:15.402701    3310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725993795401990823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:43:25 pause-459729 kubelet[3310]: E0910 18:43:25.404078    3310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725993805403863631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 18:43:25 pause-459729 kubelet[3310]: E0910 18:43:25.404125    3310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725993805403863631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-459729 -n pause-459729
helpers_test.go:261: (dbg) Run:  kubectl --context pause-459729 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (57.86s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (300.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-432422 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-432422 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m0.131984068s)

                                                
                                                
-- stdout --
	* [old-k8s-version-432422] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-432422" primary control-plane node in "old-k8s-version-432422" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 18:49:12.591122   64489 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:49:12.591221   64489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:49:12.591231   64489 out.go:358] Setting ErrFile to fd 2...
	I0910 18:49:12.591240   64489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:49:12.591449   64489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:49:12.592049   64489 out.go:352] Setting JSON to false
	I0910 18:49:12.593106   64489 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5505,"bootTime":1725988648,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 18:49:12.593165   64489 start.go:139] virtualization: kvm guest
	I0910 18:49:12.595363   64489 out.go:177] * [old-k8s-version-432422] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 18:49:12.596709   64489 notify.go:220] Checking for updates...
	I0910 18:49:12.596713   64489 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:49:12.597936   64489 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:49:12.599149   64489 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:49:12.600399   64489 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:49:12.601469   64489 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 18:49:12.602558   64489 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:49:12.604053   64489 config.go:182] Loaded profile config "bridge-642043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:49:12.604219   64489 config.go:182] Loaded profile config "enable-default-cni-642043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:49:12.604342   64489 config.go:182] Loaded profile config "flannel-642043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:49:12.604461   64489 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:49:12.647966   64489 out.go:177] * Using the kvm2 driver based on user configuration
	I0910 18:49:12.649389   64489 start.go:297] selected driver: kvm2
	I0910 18:49:12.649408   64489 start.go:901] validating driver "kvm2" against <nil>
	I0910 18:49:12.649423   64489 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:49:12.650228   64489 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:49:12.650335   64489 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 18:49:12.669762   64489 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 18:49:12.669821   64489 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 18:49:12.670077   64489 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:49:12.670151   64489 cni.go:84] Creating CNI manager for ""
	I0910 18:49:12.670165   64489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:49:12.670176   64489 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 18:49:12.670249   64489 start.go:340] cluster config:
	{Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:49:12.670378   64489 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:49:12.672299   64489 out.go:177] * Starting "old-k8s-version-432422" primary control-plane node in "old-k8s-version-432422" cluster
	I0910 18:49:12.673578   64489 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 18:49:12.673626   64489 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0910 18:49:12.673635   64489 cache.go:56] Caching tarball of preloaded images
	I0910 18:49:12.673719   64489 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 18:49:12.673733   64489 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0910 18:49:12.673863   64489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/config.json ...
	I0910 18:49:12.673886   64489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/config.json: {Name:mk9bb75484e7b3c907f6748cb62956ffa1c68a9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:49:12.674055   64489 start.go:360] acquireMachinesLock for old-k8s-version-432422: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:49:39.349925   64489 start.go:364] duration metric: took 26.675845238s to acquireMachinesLock for "old-k8s-version-432422"
	I0910 18:49:39.349987   64489 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 18:49:39.350131   64489 start.go:125] createHost starting for "" (driver="kvm2")
	I0910 18:49:39.352081   64489 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 18:49:39.352291   64489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:49:39.352354   64489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:49:39.370040   64489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34037
	I0910 18:49:39.370496   64489 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:49:39.371029   64489 main.go:141] libmachine: Using API Version  1
	I0910 18:49:39.371052   64489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:49:39.371402   64489 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:49:39.371595   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:49:39.371755   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:49:39.371905   64489 start.go:159] libmachine.API.Create for "old-k8s-version-432422" (driver="kvm2")
	I0910 18:49:39.371930   64489 client.go:168] LocalClient.Create starting
	I0910 18:49:39.371955   64489 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem
	I0910 18:49:39.371985   64489 main.go:141] libmachine: Decoding PEM data...
	I0910 18:49:39.372002   64489 main.go:141] libmachine: Parsing certificate...
	I0910 18:49:39.372060   64489 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem
	I0910 18:49:39.372085   64489 main.go:141] libmachine: Decoding PEM data...
	I0910 18:49:39.372110   64489 main.go:141] libmachine: Parsing certificate...
	I0910 18:49:39.372136   64489 main.go:141] libmachine: Running pre-create checks...
	I0910 18:49:39.372155   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .PreCreateCheck
	I0910 18:49:39.372499   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetConfigRaw
	I0910 18:49:39.372887   64489 main.go:141] libmachine: Creating machine...
	I0910 18:49:39.372900   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .Create
	I0910 18:49:39.373037   64489 main.go:141] libmachine: (old-k8s-version-432422) Creating KVM machine...
	I0910 18:49:39.374349   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | found existing default KVM network
	I0910 18:49:39.375928   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:39.375756   64854 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f7:ff:e6} reservation:<nil>}
	I0910 18:49:39.376937   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:39.376816   64854 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:28:f7:4e} reservation:<nil>}
	I0910 18:49:39.378041   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:39.377955   64854 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000366950}
	I0910 18:49:39.378064   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | created network xml: 
	I0910 18:49:39.378076   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | <network>
	I0910 18:49:39.378091   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG |   <name>mk-old-k8s-version-432422</name>
	I0910 18:49:39.378104   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG |   <dns enable='no'/>
	I0910 18:49:39.378111   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG |   
	I0910 18:49:39.378121   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0910 18:49:39.378137   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG |     <dhcp>
	I0910 18:49:39.378151   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0910 18:49:39.378161   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG |     </dhcp>
	I0910 18:49:39.378169   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG |   </ip>
	I0910 18:49:39.378178   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG |   
	I0910 18:49:39.378186   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | </network>
	I0910 18:49:39.378197   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | 
	I0910 18:49:39.383474   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | trying to create private KVM network mk-old-k8s-version-432422 192.168.61.0/24...
	I0910 18:49:39.456774   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | private KVM network mk-old-k8s-version-432422 192.168.61.0/24 created
	I0910 18:49:39.456810   64489 main.go:141] libmachine: (old-k8s-version-432422) Setting up store path in /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422 ...
	I0910 18:49:39.456824   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:39.456718   64854 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:49:39.456861   64489 main.go:141] libmachine: (old-k8s-version-432422) Building disk image from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 18:49:39.456879   64489 main.go:141] libmachine: (old-k8s-version-432422) Downloading /home/jenkins/minikube-integration/19598-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 18:49:39.700598   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:39.700504   64854 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa...
	I0910 18:49:39.892855   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:39.892730   64854 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/old-k8s-version-432422.rawdisk...
	I0910 18:49:39.892884   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | Writing magic tar header
	I0910 18:49:39.892901   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | Writing SSH key tar header
	I0910 18:49:39.892914   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:39.892840   64854 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422 ...
	I0910 18:49:39.892983   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422
	I0910 18:49:39.893046   64489 main.go:141] libmachine: (old-k8s-version-432422) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422 (perms=drwx------)
	I0910 18:49:39.893063   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines
	I0910 18:49:39.893090   64489 main.go:141] libmachine: (old-k8s-version-432422) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines (perms=drwxr-xr-x)
	I0910 18:49:39.893109   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:49:39.893124   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973
	I0910 18:49:39.893136   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0910 18:49:39.893149   64489 main.go:141] libmachine: (old-k8s-version-432422) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube (perms=drwxr-xr-x)
	I0910 18:49:39.893170   64489 main.go:141] libmachine: (old-k8s-version-432422) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973 (perms=drwxrwxr-x)
	I0910 18:49:39.893183   64489 main.go:141] libmachine: (old-k8s-version-432422) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0910 18:49:39.893206   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | Checking permissions on dir: /home/jenkins
	I0910 18:49:39.893232   64489 main.go:141] libmachine: (old-k8s-version-432422) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0910 18:49:39.893244   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | Checking permissions on dir: /home
	I0910 18:49:39.893256   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | Skipping /home - not owner
	I0910 18:49:39.893268   64489 main.go:141] libmachine: (old-k8s-version-432422) Creating domain...
	I0910 18:49:39.894541   64489 main.go:141] libmachine: (old-k8s-version-432422) define libvirt domain using xml: 
	I0910 18:49:39.894566   64489 main.go:141] libmachine: (old-k8s-version-432422) <domain type='kvm'>
	I0910 18:49:39.894578   64489 main.go:141] libmachine: (old-k8s-version-432422)   <name>old-k8s-version-432422</name>
	I0910 18:49:39.894586   64489 main.go:141] libmachine: (old-k8s-version-432422)   <memory unit='MiB'>2200</memory>
	I0910 18:49:39.894609   64489 main.go:141] libmachine: (old-k8s-version-432422)   <vcpu>2</vcpu>
	I0910 18:49:39.894625   64489 main.go:141] libmachine: (old-k8s-version-432422)   <features>
	I0910 18:49:39.894638   64489 main.go:141] libmachine: (old-k8s-version-432422)     <acpi/>
	I0910 18:49:39.894653   64489 main.go:141] libmachine: (old-k8s-version-432422)     <apic/>
	I0910 18:49:39.894665   64489 main.go:141] libmachine: (old-k8s-version-432422)     <pae/>
	I0910 18:49:39.894675   64489 main.go:141] libmachine: (old-k8s-version-432422)     
	I0910 18:49:39.894683   64489 main.go:141] libmachine: (old-k8s-version-432422)   </features>
	I0910 18:49:39.894693   64489 main.go:141] libmachine: (old-k8s-version-432422)   <cpu mode='host-passthrough'>
	I0910 18:49:39.894701   64489 main.go:141] libmachine: (old-k8s-version-432422)   
	I0910 18:49:39.894707   64489 main.go:141] libmachine: (old-k8s-version-432422)   </cpu>
	I0910 18:49:39.894715   64489 main.go:141] libmachine: (old-k8s-version-432422)   <os>
	I0910 18:49:39.894744   64489 main.go:141] libmachine: (old-k8s-version-432422)     <type>hvm</type>
	I0910 18:49:39.894756   64489 main.go:141] libmachine: (old-k8s-version-432422)     <boot dev='cdrom'/>
	I0910 18:49:39.894764   64489 main.go:141] libmachine: (old-k8s-version-432422)     <boot dev='hd'/>
	I0910 18:49:39.894776   64489 main.go:141] libmachine: (old-k8s-version-432422)     <bootmenu enable='no'/>
	I0910 18:49:39.894785   64489 main.go:141] libmachine: (old-k8s-version-432422)   </os>
	I0910 18:49:39.894792   64489 main.go:141] libmachine: (old-k8s-version-432422)   <devices>
	I0910 18:49:39.894800   64489 main.go:141] libmachine: (old-k8s-version-432422)     <disk type='file' device='cdrom'>
	I0910 18:49:39.894818   64489 main.go:141] libmachine: (old-k8s-version-432422)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/boot2docker.iso'/>
	I0910 18:49:39.894833   64489 main.go:141] libmachine: (old-k8s-version-432422)       <target dev='hdc' bus='scsi'/>
	I0910 18:49:39.894848   64489 main.go:141] libmachine: (old-k8s-version-432422)       <readonly/>
	I0910 18:49:39.894858   64489 main.go:141] libmachine: (old-k8s-version-432422)     </disk>
	I0910 18:49:39.894871   64489 main.go:141] libmachine: (old-k8s-version-432422)     <disk type='file' device='disk'>
	I0910 18:49:39.894884   64489 main.go:141] libmachine: (old-k8s-version-432422)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0910 18:49:39.894899   64489 main.go:141] libmachine: (old-k8s-version-432422)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/old-k8s-version-432422.rawdisk'/>
	I0910 18:49:39.894914   64489 main.go:141] libmachine: (old-k8s-version-432422)       <target dev='hda' bus='virtio'/>
	I0910 18:49:39.894926   64489 main.go:141] libmachine: (old-k8s-version-432422)     </disk>
	I0910 18:49:39.894936   64489 main.go:141] libmachine: (old-k8s-version-432422)     <interface type='network'>
	I0910 18:49:39.894949   64489 main.go:141] libmachine: (old-k8s-version-432422)       <source network='mk-old-k8s-version-432422'/>
	I0910 18:49:39.894962   64489 main.go:141] libmachine: (old-k8s-version-432422)       <model type='virtio'/>
	I0910 18:49:39.894971   64489 main.go:141] libmachine: (old-k8s-version-432422)     </interface>
	I0910 18:49:39.894979   64489 main.go:141] libmachine: (old-k8s-version-432422)     <interface type='network'>
	I0910 18:49:39.894989   64489 main.go:141] libmachine: (old-k8s-version-432422)       <source network='default'/>
	I0910 18:49:39.894996   64489 main.go:141] libmachine: (old-k8s-version-432422)       <model type='virtio'/>
	I0910 18:49:39.895007   64489 main.go:141] libmachine: (old-k8s-version-432422)     </interface>
	I0910 18:49:39.895015   64489 main.go:141] libmachine: (old-k8s-version-432422)     <serial type='pty'>
	I0910 18:49:39.895027   64489 main.go:141] libmachine: (old-k8s-version-432422)       <target port='0'/>
	I0910 18:49:39.895040   64489 main.go:141] libmachine: (old-k8s-version-432422)     </serial>
	I0910 18:49:39.895053   64489 main.go:141] libmachine: (old-k8s-version-432422)     <console type='pty'>
	I0910 18:49:39.895067   64489 main.go:141] libmachine: (old-k8s-version-432422)       <target type='serial' port='0'/>
	I0910 18:49:39.895079   64489 main.go:141] libmachine: (old-k8s-version-432422)     </console>
	I0910 18:49:39.895090   64489 main.go:141] libmachine: (old-k8s-version-432422)     <rng model='virtio'>
	I0910 18:49:39.895104   64489 main.go:141] libmachine: (old-k8s-version-432422)       <backend model='random'>/dev/random</backend>
	I0910 18:49:39.895125   64489 main.go:141] libmachine: (old-k8s-version-432422)     </rng>
	I0910 18:49:39.895139   64489 main.go:141] libmachine: (old-k8s-version-432422)     
	I0910 18:49:39.895149   64489 main.go:141] libmachine: (old-k8s-version-432422)     
	I0910 18:49:39.895175   64489 main.go:141] libmachine: (old-k8s-version-432422)   </devices>
	I0910 18:49:39.895194   64489 main.go:141] libmachine: (old-k8s-version-432422) </domain>
	I0910 18:49:39.895209   64489 main.go:141] libmachine: (old-k8s-version-432422) 
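
For context: the <network> and <domain> XML fragments logged above are handed to libvirt essentially verbatim. A minimal, hypothetical Go sketch of that define-and-start sequence, written against the libvirt.org/go/libvirt bindings rather than minikube's actual kvm2 driver code, could look like this (the placeholder XML strings must be replaced with full documents such as the ones above):

// Hypothetical sketch only: define a private network and a domain from
// XML strings like those logged above, then start them. Not the real
// minikube kvm2 driver code; it only illustrates the libvirt calls used.
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func defineAndStart(networkXML, domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	// Define and start the private network (e.g. mk-old-k8s-version-432422).
	net, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		return err
	}
	defer net.Free()
	if err := net.Create(); err != nil {
		return err
	}

	// Define and start the guest described by the <domain> XML.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()
	return dom.Create()
}

func main() {
	// Replace the placeholders with the full XML documents from the log.
	if err := defineAndStart("<network>...</network>", "<domain>...</domain>"); err != nil {
		log.Fatal(err)
	}
}
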
	I0910 18:49:39.899881   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:f4:0d:74 in network default
	I0910 18:49:39.900607   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:49:39.900664   64489 main.go:141] libmachine: (old-k8s-version-432422) Ensuring networks are active...
	I0910 18:49:39.901432   64489 main.go:141] libmachine: (old-k8s-version-432422) Ensuring network default is active
	I0910 18:49:39.901808   64489 main.go:141] libmachine: (old-k8s-version-432422) Ensuring network mk-old-k8s-version-432422 is active
	I0910 18:49:39.902412   64489 main.go:141] libmachine: (old-k8s-version-432422) Getting domain xml...
	I0910 18:49:39.903264   64489 main.go:141] libmachine: (old-k8s-version-432422) Creating domain...
	I0910 18:49:41.377701   64489 main.go:141] libmachine: (old-k8s-version-432422) Waiting to get IP...
	I0910 18:49:41.378686   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:49:41.379320   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:49:41.379358   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:41.379306   64854 retry.go:31] will retry after 285.927865ms: waiting for machine to come up
	I0910 18:49:41.666958   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:49:41.667618   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:49:41.667644   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:41.667595   64854 retry.go:31] will retry after 323.196918ms: waiting for machine to come up
	I0910 18:49:41.994790   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:49:41.995400   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:49:41.995420   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:41.995343   64854 retry.go:31] will retry after 433.533829ms: waiting for machine to come up
	I0910 18:49:42.430691   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:49:42.431295   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:49:42.431330   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:42.431233   64854 retry.go:31] will retry after 376.573573ms: waiting for machine to come up
	I0910 18:49:42.810267   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:49:42.821178   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:49:42.821202   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:42.817213   64854 retry.go:31] will retry after 642.546369ms: waiting for machine to come up
	I0910 18:49:43.461360   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:49:43.461856   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:49:43.461884   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:43.461834   64854 retry.go:31] will retry after 636.511384ms: waiting for machine to come up
	I0910 18:49:44.101683   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:49:44.104261   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:49:44.104291   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:44.104214   64854 retry.go:31] will retry after 1.026229685s: waiting for machine to come up
	I0910 18:49:45.132191   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:49:45.132984   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:49:45.133013   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:45.132947   64854 retry.go:31] will retry after 1.336087899s: waiting for machine to come up
	I0910 18:49:46.470260   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:49:46.470968   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:49:46.470994   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:46.470888   64854 retry.go:31] will retry after 1.506670277s: waiting for machine to come up
	I0910 18:49:47.979062   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:49:47.979495   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:49:47.979521   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:47.979423   64854 retry.go:31] will retry after 2.284315854s: waiting for machine to come up
	I0910 18:49:50.265181   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:49:50.265602   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:49:50.265631   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:50.265524   64854 retry.go:31] will retry after 2.267263482s: waiting for machine to come up
	I0910 18:49:52.534487   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:49:52.535075   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:49:52.535104   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:52.535035   64854 retry.go:31] will retry after 2.577084278s: waiting for machine to come up
	I0910 18:49:55.114038   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:49:55.114528   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:49:55.114564   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:55.114471   64854 retry.go:31] will retry after 4.15190677s: waiting for machine to come up
	I0910 18:49:59.270120   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:49:59.270656   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:49:59.270683   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:49:59.270606   64854 retry.go:31] will retry after 5.631932411s: waiting for machine to come up
	I0910 18:50:04.903690   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:04.904129   64489 main.go:141] libmachine: (old-k8s-version-432422) Found IP for machine: 192.168.61.51
	I0910 18:50:04.904155   64489 main.go:141] libmachine: (old-k8s-version-432422) Reserving static IP address...
	I0910 18:50:04.904169   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has current primary IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:04.904508   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-432422", mac: "52:54:00:65:ab:4d", ip: "192.168.61.51"} in network mk-old-k8s-version-432422
	I0910 18:50:04.986541   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | Getting to WaitForSSH function...
	I0910 18:50:04.986573   64489 main.go:141] libmachine: (old-k8s-version-432422) Reserved static IP address: 192.168.61.51
	I0910 18:50:04.986586   64489 main.go:141] libmachine: (old-k8s-version-432422) Waiting for SSH to be available...
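
The "will retry after ...: waiting for machine to come up" lines above are a plain retry loop with growing, jittered delays between attempts. A small generic sketch of that pattern in Go (standard library only; not minikube's actual retry.go) is:

// Generic retry-with-backoff sketch in the spirit of the
// "will retry after ..." lines above; not minikube's retry.go.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil calls fn until it succeeds or the timeout expires,
// sleeping a jittered, growing delay between attempts.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	// Example: pretend the machine gets an IP on the fourth attempt.
	calls := 0
	err := retryUntil(30*time.Second, func() error {
		calls++
		if calls < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
	fmt.Println("result:", err)
}
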
	I0910 18:50:04.989144   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:04.989640   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:49:56 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:minikube Clientid:01:52:54:00:65:ab:4d}
	I0910 18:50:04.989670   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:04.989802   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | Using SSH client type: external
	I0910 18:50:04.989822   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa (-rw-------)
	I0910 18:50:04.989850   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:50:04.989860   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | About to run SSH command:
	I0910 18:50:04.989880   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | exit 0
	I0910 18:50:05.116991   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | SSH cmd err, output: <nil>: 
	I0910 18:50:05.117304   64489 main.go:141] libmachine: (old-k8s-version-432422) KVM machine creation complete!
	I0910 18:50:05.117623   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetConfigRaw
	I0910 18:50:05.118177   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:50:05.118390   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:50:05.118576   64489 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0910 18:50:05.118595   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetState
	I0910 18:50:05.120014   64489 main.go:141] libmachine: Detecting operating system of created instance...
	I0910 18:50:05.120027   64489 main.go:141] libmachine: Waiting for SSH to be available...
	I0910 18:50:05.120033   64489 main.go:141] libmachine: Getting to WaitForSSH function...
	I0910 18:50:05.120039   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:50:05.122604   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:05.123130   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:49:56 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:50:05.123152   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:05.123278   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:50:05.123454   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:50:05.123605   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:50:05.123742   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:50:05.123916   64489 main.go:141] libmachine: Using SSH client type: native
	I0910 18:50:05.124151   64489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:50:05.124165   64489 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0910 18:50:05.233000   64489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
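
Both SSH probes above (the external ssh binary and the native client) reduce to "run `exit 0` on the guest and check that it succeeds". A hedged sketch of that reachability check using golang.org/x/crypto/ssh, with host-key checking relaxed just as in the logged ssh flags (this is not libmachine's implementation):

// Sketch of the "run `exit 0` over SSH" availability check seen above.
// Uses golang.org/x/crypto/ssh; not libmachine's native SSH client.
package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func sshExitZero(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Equivalent to StrictHostKeyChecking=no in the logged ssh command.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	// Values taken from the log above; adjust for a different machine.
	key := "/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa"
	if err := sshExitZero("192.168.61.51:22", "docker", key); err != nil {
		log.Fatal(err)
	}
	fmt.Println("SSH is available")
}
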
	I0910 18:50:05.233024   64489 main.go:141] libmachine: Detecting the provisioner...
	I0910 18:50:05.233034   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:50:05.235899   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:05.236349   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:49:56 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:50:05.236380   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:05.236580   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:50:05.236744   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:50:05.236904   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:50:05.237220   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:50:05.237402   64489 main.go:141] libmachine: Using SSH client type: native
	I0910 18:50:05.237576   64489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:50:05.237587   64489 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0910 18:50:05.341847   64489 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0910 18:50:05.341924   64489 main.go:141] libmachine: found compatible host: buildroot
	I0910 18:50:05.341936   64489 main.go:141] libmachine: Provisioning with buildroot...
	I0910 18:50:05.341944   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:50:05.342168   64489 buildroot.go:166] provisioning hostname "old-k8s-version-432422"
	I0910 18:50:05.342193   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:50:05.342397   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:50:05.344682   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:05.345099   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:49:56 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:50:05.345129   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:05.345257   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:50:05.345442   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:50:05.345616   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:50:05.345740   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:50:05.345888   64489 main.go:141] libmachine: Using SSH client type: native
	I0910 18:50:05.346091   64489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:50:05.346109   64489 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-432422 && echo "old-k8s-version-432422" | sudo tee /etc/hostname
	I0910 18:50:05.461639   64489 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-432422
	
	I0910 18:50:05.461668   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:50:05.464585   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:05.464906   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:49:56 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:50:05.464935   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:05.465090   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:50:05.465262   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:50:05.465415   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:50:05.465548   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:50:05.465709   64489 main.go:141] libmachine: Using SSH client type: native
	I0910 18:50:05.465910   64489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:50:05.465935   64489 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-432422' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-432422/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-432422' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:50:05.586317   64489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:50:05.586346   64489 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:50:05.586383   64489 buildroot.go:174] setting up certificates
	I0910 18:50:05.586399   64489 provision.go:84] configureAuth start
	I0910 18:50:05.586413   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:50:05.586676   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:50:05.589328   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:05.589635   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:49:56 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:50:05.589657   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:05.589813   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:50:05.591998   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:05.592357   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:49:56 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:50:05.592382   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:05.592501   64489 provision.go:143] copyHostCerts
	I0910 18:50:05.592564   64489 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:50:05.592573   64489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:50:05.592626   64489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:50:05.592737   64489 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:50:05.592746   64489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:50:05.592768   64489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:50:05.592841   64489 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:50:05.592848   64489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:50:05.592866   64489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:50:05.592929   64489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-432422 san=[127.0.0.1 192.168.61.51 localhost minikube old-k8s-version-432422]
	I0910 18:50:05.641136   64489 provision.go:177] copyRemoteCerts
	I0910 18:50:05.641183   64489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:50:05.641204   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:50:05.643957   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:05.644264   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:49:56 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:50:05.644308   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:05.644455   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:50:05.644651   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:50:05.644820   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:50:05.644981   64489 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:50:05.728001   64489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:50:05.755593   64489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0910 18:50:05.783331   64489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 18:50:05.813590   64489 provision.go:87] duration metric: took 227.176465ms to configureAuth
	I0910 18:50:05.813619   64489 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:50:05.813808   64489 config.go:182] Loaded profile config "old-k8s-version-432422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0910 18:50:05.813897   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:50:05.816749   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:05.817120   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:49:56 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:50:05.817146   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:05.817339   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:50:05.817521   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:50:05.817677   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:50:05.817859   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:50:05.818022   64489 main.go:141] libmachine: Using SSH client type: native
	I0910 18:50:05.818234   64489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:50:05.818254   64489 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:50:06.056386   64489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:50:06.056416   64489 main.go:141] libmachine: Checking connection to Docker...
	I0910 18:50:06.056428   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetURL
	I0910 18:50:06.057721   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | Using libvirt version 6000000
	I0910 18:50:06.061363   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:06.061804   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:49:56 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:50:06.061829   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:06.061975   64489 main.go:141] libmachine: Docker is up and running!
	I0910 18:50:06.061986   64489 main.go:141] libmachine: Reticulating splines...
	I0910 18:50:06.061993   64489 client.go:171] duration metric: took 26.690055539s to LocalClient.Create
	I0910 18:50:06.062027   64489 start.go:167] duration metric: took 26.690110687s to libmachine.API.Create "old-k8s-version-432422"
	I0910 18:50:06.062036   64489 start.go:293] postStartSetup for "old-k8s-version-432422" (driver="kvm2")
	I0910 18:50:06.062055   64489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:50:06.062080   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:50:06.062290   64489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:50:06.062309   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:50:06.065120   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:06.065480   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:49:56 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:50:06.065507   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:06.065668   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:50:06.065854   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:50:06.066002   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:50:06.066148   64489 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:50:06.149274   64489 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:50:06.155539   64489 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:50:06.155565   64489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:50:06.155637   64489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:50:06.155729   64489 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:50:06.155838   64489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:50:06.167970   64489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:50:06.200009   64489 start.go:296] duration metric: took 137.959029ms for postStartSetup
	I0910 18:50:06.200067   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetConfigRaw
	I0910 18:50:06.200750   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:50:06.203240   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:06.203661   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:49:56 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:50:06.203690   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:06.203962   64489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/config.json ...
	I0910 18:50:06.204187   64489 start.go:128] duration metric: took 26.854042419s to createHost
	I0910 18:50:06.204214   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:50:06.206987   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:06.207435   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:49:56 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:50:06.207459   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:06.207621   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:50:06.207799   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:50:06.207981   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:50:06.208154   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:50:06.208326   64489 main.go:141] libmachine: Using SSH client type: native
	I0910 18:50:06.208525   64489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:50:06.208541   64489 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:50:06.314818   64489 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994206.266053112
	
	I0910 18:50:06.314838   64489 fix.go:216] guest clock: 1725994206.266053112
	I0910 18:50:06.314844   64489 fix.go:229] Guest: 2024-09-10 18:50:06.266053112 +0000 UTC Remote: 2024-09-10 18:50:06.2042003 +0000 UTC m=+53.655371379 (delta=61.852812ms)
	I0910 18:50:06.314860   64489 fix.go:200] guest clock delta is within tolerance: 61.852812ms
	I0910 18:50:06.314864   64489 start.go:83] releasing machines lock for "old-k8s-version-432422", held for 26.96490779s
	I0910 18:50:06.314890   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:50:06.315198   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:50:06.318310   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:06.318799   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:49:56 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:50:06.318827   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:06.319174   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:50:06.319774   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:50:06.319947   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:50:06.320043   64489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:50:06.320086   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:50:06.320196   64489 ssh_runner.go:195] Run: cat /version.json
	I0910 18:50:06.320224   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:50:06.323422   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:06.323611   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:06.323785   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:49:56 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:50:06.323805   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:06.323933   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:49:56 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:50:06.323949   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:06.324120   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:50:06.324317   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:50:06.324512   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:50:06.324531   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:50:06.324714   64489 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:50:06.324781   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:50:06.324989   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:50:06.325198   64489 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:50:06.410557   64489 ssh_runner.go:195] Run: systemctl --version
	I0910 18:50:06.438751   64489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:50:06.599493   64489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:50:06.606214   64489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:50:06.606275   64489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:50:06.624156   64489 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:50:06.624196   64489 start.go:495] detecting cgroup driver to use...
	I0910 18:50:06.624271   64489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:50:06.640542   64489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:50:06.657630   64489 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:50:06.657693   64489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:50:06.675813   64489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:50:06.690465   64489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:50:06.837566   64489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:50:06.999763   64489 docker.go:233] disabling docker service ...
	I0910 18:50:06.999833   64489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:50:07.018792   64489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:50:07.032520   64489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:50:07.180608   64489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:50:07.322691   64489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:50:07.337063   64489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:50:07.358633   64489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0910 18:50:07.358713   64489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:50:07.369660   64489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:50:07.369724   64489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:50:07.382990   64489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:50:07.397925   64489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:50:07.409096   64489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:50:07.420902   64489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:50:07.432116   64489 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:50:07.432172   64489 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:50:07.447825   64489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:50:07.460651   64489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:50:07.616863   64489 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:50:07.730202   64489 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:50:07.730267   64489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:50:07.735672   64489 start.go:563] Will wait 60s for crictl version
	I0910 18:50:07.735721   64489 ssh_runner.go:195] Run: which crictl
	I0910 18:50:07.740015   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:50:07.792121   64489 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:50:07.792217   64489 ssh_runner.go:195] Run: crio --version
	I0910 18:50:07.826317   64489 ssh_runner.go:195] Run: crio --version
	I0910 18:50:07.860351   64489 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0910 18:50:07.861730   64489 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:50:07.864986   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:07.865373   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:49:56 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:50:07.865404   64489 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:50:07.865588   64489 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0910 18:50:07.870163   64489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:50:07.884325   64489 kubeadm.go:883] updating cluster {Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:50:07.884435   64489 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 18:50:07.884491   64489 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:50:07.916945   64489 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0910 18:50:07.917011   64489 ssh_runner.go:195] Run: which lz4
	I0910 18:50:07.921295   64489 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 18:50:07.925963   64489 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 18:50:07.925992   64489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0910 18:50:09.752118   64489 crio.go:462] duration metric: took 1.830854734s to copy over tarball
	I0910 18:50:09.752179   64489 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 18:50:12.626425   64489 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.874224337s)
	I0910 18:50:12.626451   64489 crio.go:469] duration metric: took 2.874302825s to extract the tarball
	I0910 18:50:12.626459   64489 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 18:50:12.676203   64489 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:50:12.747373   64489 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0910 18:50:12.747398   64489 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 18:50:12.747472   64489 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:50:12.747753   64489 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:50:12.747776   64489 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:50:12.747805   64489 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:50:12.747969   64489 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0910 18:50:12.747752   64489 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:50:12.748508   64489 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0910 18:50:12.747973   64489 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:50:12.749819   64489 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:50:12.751687   64489 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:50:12.751917   64489 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:50:12.751979   64489 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:50:12.752200   64489 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:50:12.752762   64489 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0910 18:50:12.753292   64489 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:50:12.754019   64489 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0910 18:50:12.922101   64489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:50:12.933663   64489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0910 18:50:12.936571   64489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0910 18:50:12.942069   64489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0910 18:50:12.954897   64489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:50:12.981260   64489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:50:13.021348   64489 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0910 18:50:13.021402   64489 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:50:13.021458   64489 ssh_runner.go:195] Run: which crictl
	I0910 18:50:13.071365   64489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:50:13.111803   64489 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0910 18:50:13.111832   64489 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0910 18:50:13.111852   64489 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0910 18:50:13.111866   64489 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0910 18:50:13.111875   64489 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:50:13.111893   64489 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0910 18:50:13.111899   64489 ssh_runner.go:195] Run: which crictl
	I0910 18:50:13.111912   64489 ssh_runner.go:195] Run: which crictl
	I0910 18:50:13.111931   64489 ssh_runner.go:195] Run: which crictl
	I0910 18:50:13.150826   64489 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0910 18:50:13.150872   64489 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:50:13.150920   64489 ssh_runner.go:195] Run: which crictl
	I0910 18:50:13.164545   64489 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0910 18:50:13.164587   64489 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:50:13.164593   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:50:13.164615   64489 ssh_runner.go:195] Run: which crictl
	I0910 18:50:13.164684   64489 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0910 18:50:13.164703   64489 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:50:13.164752   64489 ssh_runner.go:195] Run: which crictl
	I0910 18:50:13.164824   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:50:13.164890   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:50:13.164908   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:50:13.164944   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:50:13.288692   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:50:13.288731   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:50:13.288778   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:50:13.288800   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:50:13.288850   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:50:13.288881   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:50:13.288920   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:50:13.457516   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:50:13.457759   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:50:13.465329   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:50:13.465375   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:50:13.465410   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:50:13.465444   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:50:13.465475   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:50:13.550694   64489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:50:13.630881   64489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0910 18:50:13.630944   64489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0910 18:50:13.652149   64489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0910 18:50:13.652210   64489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0910 18:50:13.656101   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:50:13.656218   64489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:50:13.656238   64489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0910 18:50:13.797850   64489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0910 18:50:13.797892   64489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0910 18:50:13.797950   64489 cache_images.go:92] duration metric: took 1.050537269s to LoadCachedImages
	W0910 18:50:13.798044   64489 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0910 18:50:13.798063   64489 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.20.0 crio true true} ...
	I0910 18:50:13.798229   64489 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-432422 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:50:13.798317   64489 ssh_runner.go:195] Run: crio config
	I0910 18:50:13.851761   64489 cni.go:84] Creating CNI manager for ""
	I0910 18:50:13.851788   64489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:50:13.851803   64489 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:50:13.851829   64489 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-432422 NodeName:old-k8s-version-432422 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0910 18:50:13.852003   64489 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-432422"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:50:13.852073   64489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0910 18:50:13.864416   64489 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:50:13.864484   64489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:50:13.877014   64489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0910 18:50:13.906631   64489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:50:13.928290   64489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0910 18:50:13.951186   64489 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0910 18:50:13.956186   64489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:50:13.973271   64489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:50:14.114329   64489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:50:14.133803   64489 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422 for IP: 192.168.61.51
	I0910 18:50:14.133840   64489 certs.go:194] generating shared ca certs ...
	I0910 18:50:14.133862   64489 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:50:14.134031   64489 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:50:14.134094   64489 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:50:14.134112   64489 certs.go:256] generating profile certs ...
	I0910 18:50:14.134178   64489 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/client.key
	I0910 18:50:14.134199   64489 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/client.crt with IP's: []
	I0910 18:50:14.304753   64489 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/client.crt ...
	I0910 18:50:14.304786   64489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/client.crt: {Name:mk3e4913688167cd28da912f471a1766948fdce8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:50:14.304982   64489 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/client.key ...
	I0910 18:50:14.304998   64489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/client.key: {Name:mkff0afc833901d49863bd4d1ca2be230bd9196f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:50:14.305135   64489 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key.da6b542b
	I0910 18:50:14.305160   64489 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.crt.da6b542b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.51]
	I0910 18:50:14.363110   64489 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.crt.da6b542b ...
	I0910 18:50:14.363143   64489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.crt.da6b542b: {Name:mkdcb269dd3b3decf2eb01d21cf040c9fd7709eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:50:14.363341   64489 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key.da6b542b ...
	I0910 18:50:14.363366   64489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key.da6b542b: {Name:mk4d2290ff03b491d48302797d5584bf8b2b4c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:50:14.363469   64489 certs.go:381] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.crt.da6b542b -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.crt
	I0910 18:50:14.363586   64489 certs.go:385] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key.da6b542b -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key
	I0910 18:50:14.363686   64489 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.key
	I0910 18:50:14.363708   64489 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.crt with IP's: []
	I0910 18:50:14.554209   64489 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.crt ...
	I0910 18:50:14.554241   64489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.crt: {Name:mk05e0c2d078a4b2f4821797984dbc9a4c835c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:50:14.554417   64489 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.key ...
	I0910 18:50:14.554430   64489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.key: {Name:mk0fcad8b722b9171181895f35f024b7eb5b664b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:50:14.554665   64489 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:50:14.554712   64489 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:50:14.554726   64489 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:50:14.554756   64489 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:50:14.554788   64489 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:50:14.554817   64489 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:50:14.554870   64489 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:50:14.555687   64489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:50:14.591803   64489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:50:14.627561   64489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:50:14.663807   64489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:50:14.696441   64489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0910 18:50:14.726578   64489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:50:14.754693   64489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:50:14.779347   64489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 18:50:14.812455   64489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:50:14.845468   64489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:50:14.874225   64489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:50:14.901860   64489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:50:14.923324   64489 ssh_runner.go:195] Run: openssl version
	I0910 18:50:14.929624   64489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:50:14.942160   64489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:50:14.946961   64489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:50:14.947002   64489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:50:14.953416   64489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:50:14.965068   64489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:50:14.977765   64489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:50:14.984511   64489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:50:14.984585   64489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:50:14.992954   64489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:50:15.013247   64489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:50:15.044912   64489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:50:15.050032   64489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:50:15.050094   64489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:50:15.059512   64489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:50:15.074995   64489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:50:15.080521   64489 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 18:50:15.080590   64489 kubeadm.go:392] StartCluster: {Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:50:15.080682   64489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:50:15.080760   64489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:50:15.138009   64489 cri.go:89] found id: ""
	I0910 18:50:15.138080   64489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:50:15.152824   64489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:50:15.168278   64489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:50:15.178709   64489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:50:15.178731   64489 kubeadm.go:157] found existing configuration files:
	
	I0910 18:50:15.178779   64489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:50:15.194020   64489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:50:15.194080   64489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:50:15.214053   64489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:50:15.227437   64489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:50:15.227497   64489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:50:15.239500   64489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:50:15.250289   64489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:50:15.250356   64489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:50:15.260757   64489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:50:15.271953   64489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:50:15.272017   64489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:50:15.283690   64489 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 18:50:15.403132   64489 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0910 18:50:15.403276   64489 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 18:50:15.581168   64489 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 18:50:15.581350   64489 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 18:50:15.581539   64489 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 18:50:15.820390   64489 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 18:50:15.890391   64489 out.go:235]   - Generating certificates and keys ...
	I0910 18:50:15.890495   64489 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 18:50:15.890606   64489 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 18:50:16.074414   64489 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 18:50:16.489438   64489 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0910 18:50:16.544069   64489 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0910 18:50:16.632006   64489 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0910 18:50:16.734079   64489 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0910 18:50:16.734480   64489 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-432422] and IPs [192.168.61.51 127.0.0.1 ::1]
	I0910 18:50:17.071311   64489 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0910 18:50:17.071749   64489 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-432422] and IPs [192.168.61.51 127.0.0.1 ::1]
	I0910 18:50:17.134351   64489 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 18:50:17.556579   64489 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 18:50:18.027747   64489 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0910 18:50:18.027984   64489 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 18:50:18.116674   64489 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 18:50:18.289370   64489 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 18:50:18.452069   64489 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 18:50:18.542363   64489 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 18:50:18.564061   64489 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 18:50:18.567445   64489 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 18:50:18.567511   64489 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 18:50:18.709671   64489 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 18:50:18.711471   64489 out.go:235]   - Booting up control plane ...
	I0910 18:50:18.711609   64489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 18:50:18.716283   64489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 18:50:18.718021   64489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 18:50:18.718879   64489 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 18:50:18.723505   64489 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 18:50:58.678053   64489 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0910 18:50:58.680398   64489 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:50:58.680640   64489 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:51:03.679528   64489 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:51:03.679776   64489 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:51:13.679240   64489 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:51:13.679540   64489 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:51:33.680043   64489 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:51:33.680324   64489 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:52:13.682135   64489 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:52:13.682680   64489 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:52:13.682705   64489 kubeadm.go:310] 
	I0910 18:52:13.682815   64489 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0910 18:52:13.682915   64489 kubeadm.go:310] 		timed out waiting for the condition
	I0910 18:52:13.682929   64489 kubeadm.go:310] 
	I0910 18:52:13.683018   64489 kubeadm.go:310] 	This error is likely caused by:
	I0910 18:52:13.683092   64489 kubeadm.go:310] 		- The kubelet is not running
	I0910 18:52:13.683357   64489 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0910 18:52:13.683390   64489 kubeadm.go:310] 
	I0910 18:52:13.683592   64489 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0910 18:52:13.683679   64489 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0910 18:52:13.683769   64489 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0910 18:52:13.683782   64489 kubeadm.go:310] 
	I0910 18:52:13.684038   64489 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0910 18:52:13.684237   64489 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0910 18:52:13.684256   64489 kubeadm.go:310] 
	I0910 18:52:13.684514   64489 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0910 18:52:13.684769   64489 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0910 18:52:13.684914   64489 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0910 18:52:13.685126   64489 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0910 18:52:13.685167   64489 kubeadm.go:310] 
	I0910 18:52:13.685512   64489 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 18:52:13.685965   64489 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0910 18:52:13.686093   64489 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0910 18:52:13.686171   64489 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-432422] and IPs [192.168.61.51 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-432422] and IPs [192.168.61.51 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-432422] and IPs [192.168.61.51 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-432422] and IPs [192.168.61.51 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0910 18:52:13.686219   64489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 18:52:15.158995   64489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.472746649s)
	I0910 18:52:15.159081   64489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:52:15.173131   64489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:52:15.182600   64489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:52:15.182615   64489 kubeadm.go:157] found existing configuration files:
	
	I0910 18:52:15.182651   64489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:52:15.191725   64489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:52:15.191771   64489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:52:15.200752   64489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:52:15.209239   64489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:52:15.209289   64489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:52:15.218089   64489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:52:15.226653   64489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:52:15.226695   64489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:52:15.235566   64489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:52:15.244397   64489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:52:15.244441   64489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:52:15.253392   64489 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 18:52:15.451011   64489 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 18:54:12.056601   64489 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0910 18:54:12.056703   64489 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0910 18:54:12.058152   64489 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0910 18:54:12.058209   64489 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 18:54:12.058279   64489 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 18:54:12.058406   64489 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 18:54:12.058544   64489 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 18:54:12.058639   64489 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 18:54:12.060360   64489 out.go:235]   - Generating certificates and keys ...
	I0910 18:54:12.060443   64489 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 18:54:12.060508   64489 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 18:54:12.060600   64489 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 18:54:12.060671   64489 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 18:54:12.060758   64489 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 18:54:12.060832   64489 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 18:54:12.060916   64489 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 18:54:12.060988   64489 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 18:54:12.061057   64489 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 18:54:12.061149   64489 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 18:54:12.061184   64489 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 18:54:12.061228   64489 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 18:54:12.061272   64489 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 18:54:12.061318   64489 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 18:54:12.061366   64489 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 18:54:12.061410   64489 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 18:54:12.061500   64489 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 18:54:12.061623   64489 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 18:54:12.061674   64489 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 18:54:12.061759   64489 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 18:54:12.063675   64489 out.go:235]   - Booting up control plane ...
	I0910 18:54:12.063764   64489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 18:54:12.063833   64489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 18:54:12.063890   64489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 18:54:12.063967   64489 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 18:54:12.064107   64489 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 18:54:12.064149   64489 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0910 18:54:12.064207   64489 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:54:12.064382   64489 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:54:12.064448   64489 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:54:12.064603   64489 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:54:12.064662   64489 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:54:12.064826   64489 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:54:12.064887   64489 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:54:12.065051   64489 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:54:12.065134   64489 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 18:54:12.065293   64489 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 18:54:12.065300   64489 kubeadm.go:310] 
	I0910 18:54:12.065339   64489 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0910 18:54:12.065371   64489 kubeadm.go:310] 		timed out waiting for the condition
	I0910 18:54:12.065377   64489 kubeadm.go:310] 
	I0910 18:54:12.065412   64489 kubeadm.go:310] 	This error is likely caused by:
	I0910 18:54:12.065439   64489 kubeadm.go:310] 		- The kubelet is not running
	I0910 18:54:12.065522   64489 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0910 18:54:12.065529   64489 kubeadm.go:310] 
	I0910 18:54:12.065607   64489 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0910 18:54:12.065634   64489 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0910 18:54:12.065665   64489 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0910 18:54:12.065671   64489 kubeadm.go:310] 
	I0910 18:54:12.065752   64489 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0910 18:54:12.065818   64489 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0910 18:54:12.065824   64489 kubeadm.go:310] 
	I0910 18:54:12.065915   64489 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0910 18:54:12.066002   64489 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0910 18:54:12.066072   64489 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0910 18:54:12.066132   64489 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0910 18:54:12.066148   64489 kubeadm.go:310] 
	I0910 18:54:12.066190   64489 kubeadm.go:394] duration metric: took 3m56.985607608s to StartCluster
	I0910 18:54:12.066239   64489 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 18:54:12.066289   64489 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 18:54:12.124665   64489 cri.go:89] found id: ""
	I0910 18:54:12.124690   64489 logs.go:276] 0 containers: []
	W0910 18:54:12.124702   64489 logs.go:278] No container was found matching "kube-apiserver"
	I0910 18:54:12.124709   64489 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 18:54:12.124756   64489 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 18:54:12.162247   64489 cri.go:89] found id: ""
	I0910 18:54:12.162276   64489 logs.go:276] 0 containers: []
	W0910 18:54:12.162286   64489 logs.go:278] No container was found matching "etcd"
	I0910 18:54:12.162293   64489 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 18:54:12.162362   64489 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 18:54:12.211296   64489 cri.go:89] found id: ""
	I0910 18:54:12.211323   64489 logs.go:276] 0 containers: []
	W0910 18:54:12.211333   64489 logs.go:278] No container was found matching "coredns"
	I0910 18:54:12.211340   64489 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 18:54:12.211387   64489 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 18:54:12.248872   64489 cri.go:89] found id: ""
	I0910 18:54:12.248895   64489 logs.go:276] 0 containers: []
	W0910 18:54:12.248905   64489 logs.go:278] No container was found matching "kube-scheduler"
	I0910 18:54:12.248915   64489 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 18:54:12.248971   64489 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 18:54:12.287331   64489 cri.go:89] found id: ""
	I0910 18:54:12.287356   64489 logs.go:276] 0 containers: []
	W0910 18:54:12.287367   64489 logs.go:278] No container was found matching "kube-proxy"
	I0910 18:54:12.287375   64489 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 18:54:12.287428   64489 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 18:54:12.322251   64489 cri.go:89] found id: ""
	I0910 18:54:12.322280   64489 logs.go:276] 0 containers: []
	W0910 18:54:12.322289   64489 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 18:54:12.322295   64489 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 18:54:12.322342   64489 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 18:54:12.357012   64489 cri.go:89] found id: ""
	I0910 18:54:12.357037   64489 logs.go:276] 0 containers: []
	W0910 18:54:12.357047   64489 logs.go:278] No container was found matching "kindnet"
	I0910 18:54:12.357065   64489 logs.go:123] Gathering logs for CRI-O ...
	I0910 18:54:12.357093   64489 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 18:54:12.462825   64489 logs.go:123] Gathering logs for container status ...
	I0910 18:54:12.462859   64489 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 18:54:12.501083   64489 logs.go:123] Gathering logs for kubelet ...
	I0910 18:54:12.501108   64489 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 18:54:12.548434   64489 logs.go:123] Gathering logs for dmesg ...
	I0910 18:54:12.548462   64489 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 18:54:12.561900   64489 logs.go:123] Gathering logs for describe nodes ...
	I0910 18:54:12.561927   64489 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 18:54:12.666143   64489 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0910 18:54:12.666166   64489 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0910 18:54:12.666210   64489 out.go:270] * 
	* 
	W0910 18:54:12.666267   64489 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0910 18:54:12.666285   64489 out.go:270] * 
	* 
	W0910 18:54:12.667133   64489 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 18:54:12.669767   64489 out.go:201] 
	W0910 18:54:12.670752   64489 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0910 18:54:12.670800   64489 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0910 18:54:12.670828   64489 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0910 18:54:12.672677   64489 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-432422 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-432422 -n old-k8s-version-432422
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-432422 -n old-k8s-version-432422: exit status 6 (219.53042ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:54:12.932218   71225 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-432422" does not appear in /home/jenkins/minikube-integration/19598-5973/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-432422" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (300.40s)
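The failure above is minikube's K8S_KUBELET_NOT_RUNNING exit: kubeadm's wait-control-plane phase timed out because the kubelet never answered on localhost:10248, and the log closes with minikube's own suggestion to retry with the systemd cgroup driver. A minimal follow-up sketch, assuming the old-k8s-version-432422 profile still exists and reusing only the commands and flags quoted in the captured output (the exact invocation wording here is an assumption, not part of the log):

	# inspect the kubelet inside the VM (commands quoted from the kubeadm output above)
	minikube ssh -p old-k8s-version-432422 'sudo systemctl status kubelet; sudo journalctl -xeu kubelet | tail -n 100'
	minikube ssh -p old-k8s-version-432422 'sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	# refresh the stale kubectl context reported by the post-mortem status check
	minikube update-context -p old-k8s-version-432422
	# retry the first start with the cgroup-driver override suggested at the end of the log
	out/minikube-linux-amd64 start -p old-k8s-version-432422 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd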

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-836868 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-836868 --alsologtostderr -v=3: exit status 82 (2m0.501610415s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-836868"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 18:51:40.670370   70209 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:51:40.670645   70209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:51:40.670656   70209 out.go:358] Setting ErrFile to fd 2...
	I0910 18:51:40.670660   70209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:51:40.670860   70209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:51:40.671115   70209 out.go:352] Setting JSON to false
	I0910 18:51:40.671208   70209 mustload.go:65] Loading cluster: embed-certs-836868
	I0910 18:51:40.671542   70209 config.go:182] Loaded profile config "embed-certs-836868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:51:40.671616   70209 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/config.json ...
	I0910 18:51:40.671845   70209 mustload.go:65] Loading cluster: embed-certs-836868
	I0910 18:51:40.671963   70209 config.go:182] Loaded profile config "embed-certs-836868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:51:40.672004   70209 stop.go:39] StopHost: embed-certs-836868
	I0910 18:51:40.672376   70209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:51:40.672421   70209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:51:40.687385   70209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42513
	I0910 18:51:40.687923   70209 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:51:40.688498   70209 main.go:141] libmachine: Using API Version  1
	I0910 18:51:40.688526   70209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:51:40.688863   70209 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:51:40.691309   70209 out.go:177] * Stopping node "embed-certs-836868"  ...
	I0910 18:51:40.692674   70209 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0910 18:51:40.692713   70209 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 18:51:40.692901   70209 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0910 18:51:40.692924   70209 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 18:51:40.695794   70209 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:51:40.696235   70209 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 19:50:46 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 18:51:40.696270   70209 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:51:40.696424   70209 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 18:51:40.696590   70209 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 18:51:40.696755   70209 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 18:51:40.696892   70209 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 18:51:40.802016   70209 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0910 18:51:40.870385   70209 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0910 18:51:40.930286   70209 main.go:141] libmachine: Stopping "embed-certs-836868"...
	I0910 18:51:40.930320   70209 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 18:51:40.932238   70209 main.go:141] libmachine: (embed-certs-836868) Calling .Stop
	I0910 18:51:40.936482   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 0/120
	I0910 18:51:41.938067   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 1/120
	I0910 18:51:42.939425   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 2/120
	I0910 18:51:43.941158   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 3/120
	I0910 18:51:44.942296   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 4/120
	I0910 18:51:45.944112   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 5/120
	I0910 18:51:46.945554   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 6/120
	I0910 18:51:47.947650   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 7/120
	I0910 18:51:48.948948   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 8/120
	I0910 18:51:49.950316   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 9/120
	I0910 18:51:50.951687   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 10/120
	I0910 18:51:51.953014   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 11/120
	I0910 18:51:52.954699   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 12/120
	I0910 18:51:53.956270   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 13/120
	I0910 18:51:54.957898   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 14/120
	I0910 18:51:55.960020   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 15/120
	I0910 18:51:56.961479   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 16/120
	I0910 18:51:57.963118   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 17/120
	I0910 18:51:58.965346   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 18/120
	I0910 18:51:59.967013   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 19/120
	I0910 18:52:00.968200   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 20/120
	I0910 18:52:01.969559   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 21/120
	I0910 18:52:02.971413   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 22/120
	I0910 18:52:03.972659   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 23/120
	I0910 18:52:04.973953   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 24/120
	I0910 18:52:05.975729   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 25/120
	I0910 18:52:06.977252   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 26/120
	I0910 18:52:07.978622   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 27/120
	I0910 18:52:08.980097   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 28/120
	I0910 18:52:09.981829   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 29/120
	I0910 18:52:10.983859   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 30/120
	I0910 18:52:11.985355   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 31/120
	I0910 18:52:12.986601   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 32/120
	I0910 18:52:13.988003   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 33/120
	I0910 18:52:14.989173   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 34/120
	I0910 18:52:15.990844   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 35/120
	I0910 18:52:16.992097   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 36/120
	I0910 18:52:17.993510   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 37/120
	I0910 18:52:18.994820   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 38/120
	I0910 18:52:19.996397   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 39/120
	I0910 18:52:20.998912   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 40/120
	I0910 18:52:22.000368   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 41/120
	I0910 18:52:23.002019   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 42/120
	I0910 18:52:24.003415   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 43/120
	I0910 18:52:25.004793   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 44/120
	I0910 18:52:26.006745   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 45/120
	I0910 18:52:27.008185   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 46/120
	I0910 18:52:28.009612   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 47/120
	I0910 18:52:29.011060   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 48/120
	I0910 18:52:30.012393   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 49/120
	I0910 18:52:31.014686   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 50/120
	I0910 18:52:32.016365   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 51/120
	I0910 18:52:33.017879   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 52/120
	I0910 18:52:34.019201   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 53/120
	I0910 18:52:35.020821   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 54/120
	I0910 18:52:36.022762   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 55/120
	I0910 18:52:37.023964   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 56/120
	I0910 18:52:38.025305   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 57/120
	I0910 18:52:39.027476   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 58/120
	I0910 18:52:40.028812   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 59/120
	I0910 18:52:41.030953   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 60/120
	I0910 18:52:42.032241   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 61/120
	I0910 18:52:43.033703   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 62/120
	I0910 18:52:44.034922   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 63/120
	I0910 18:52:45.036234   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 64/120
	I0910 18:52:46.038003   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 65/120
	I0910 18:52:47.039290   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 66/120
	I0910 18:52:48.040628   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 67/120
	I0910 18:52:49.042078   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 68/120
	I0910 18:52:50.043312   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 69/120
	I0910 18:52:51.045459   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 70/120
	I0910 18:52:52.046939   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 71/120
	I0910 18:52:53.048423   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 72/120
	I0910 18:52:54.049924   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 73/120
	I0910 18:52:55.051307   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 74/120
	I0910 18:52:56.053343   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 75/120
	I0910 18:52:57.055763   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 76/120
	I0910 18:52:58.057104   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 77/120
	I0910 18:52:59.058473   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 78/120
	I0910 18:53:00.059854   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 79/120
	I0910 18:53:01.061950   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 80/120
	I0910 18:53:02.063372   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 81/120
	I0910 18:53:03.064667   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 82/120
	I0910 18:53:04.066017   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 83/120
	I0910 18:53:05.067391   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 84/120
	I0910 18:53:06.069306   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 85/120
	I0910 18:53:07.070631   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 86/120
	I0910 18:53:08.072032   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 87/120
	I0910 18:53:09.073391   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 88/120
	I0910 18:53:10.074761   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 89/120
	I0910 18:53:11.076966   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 90/120
	I0910 18:53:12.078263   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 91/120
	I0910 18:53:13.079613   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 92/120
	I0910 18:53:14.081060   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 93/120
	I0910 18:53:15.082406   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 94/120
	I0910 18:53:16.084406   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 95/120
	I0910 18:53:17.085748   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 96/120
	I0910 18:53:18.087772   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 97/120
	I0910 18:53:19.089014   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 98/120
	I0910 18:53:20.090493   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 99/120
	I0910 18:53:21.092759   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 100/120
	I0910 18:53:22.094262   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 101/120
	I0910 18:53:23.095656   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 102/120
	I0910 18:53:24.097158   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 103/120
	I0910 18:53:25.098568   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 104/120
	I0910 18:53:26.100922   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 105/120
	I0910 18:53:27.102511   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 106/120
	I0910 18:53:28.103867   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 107/120
	I0910 18:53:29.105137   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 108/120
	I0910 18:53:30.106361   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 109/120
	I0910 18:53:31.108570   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 110/120
	I0910 18:53:32.110328   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 111/120
	I0910 18:53:33.111670   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 112/120
	I0910 18:53:34.112965   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 113/120
	I0910 18:53:35.114226   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 114/120
	I0910 18:53:36.116106   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 115/120
	I0910 18:53:37.117431   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 116/120
	I0910 18:53:38.118867   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 117/120
	I0910 18:53:39.120191   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 118/120
	I0910 18:53:40.121429   70209 main.go:141] libmachine: (embed-certs-836868) Waiting for machine to stop 119/120
	I0910 18:53:41.122619   70209 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0910 18:53:41.122668   70209 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0910 18:53:41.124513   70209 out.go:201] 
	W0910 18:53:41.125919   70209 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0910 18:53:41.125946   70209 out.go:270] * 
	* 
	W0910 18:53:41.129333   70209 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 18:53:41.130757   70209 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-836868 --alsologtostderr -v=3" : exit status 82
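Note on the failure mode recorded above: the log shows the driver issuing a single Stop call and then polling once per second, for up to 120 attempts, before giving up with "unable to stop vm, current state Running", which minikube surfaces as GUEST_STOP_TIMEOUT and exit status 82. The Go sketch below only illustrates that polling pattern; the machine interface and the stopWithTimeout and errStopTimeout names are hypothetical stand-ins, not minikube's actual API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// machine is a hypothetical stand-in for the libmachine host handle seen in
// the log; only the two operations the log exercises are modeled here.
type machine interface {
	Stop() error            // request a guest shutdown
	State() (string, error) // e.g. "Running" or "Stopped"
}

// errStopTimeout mirrors the "unable to stop vm, current state Running"
// condition that the test surfaces as GUEST_STOP_TIMEOUT (exit status 82).
var errStopTimeout = errors.New("unable to stop vm: still running after timeout")

// stopWithTimeout issues one stop request and then polls once per second,
// matching the "Waiting for machine to stop i/120" lines above.
func stopWithTimeout(m machine, attempts int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		st, err := m.State()
		if err == nil && st != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errStopTimeout
}

// stuckVM simulates a guest that ignores the shutdown request, which is the
// behaviour the failing stop tests ran into.
type stuckVM struct{}

func (stuckVM) Stop() error            { return nil }
func (stuckVM) State() (string, error) { return "Running", nil }

func main() {
	// Three attempts instead of 120 so the example finishes quickly.
	if err := stopWithTimeout(stuckVM{}, 3); err != nil {
		fmt.Println("stop err:", err)
	}
}

With a guest that never leaves "Running", the loop exhausts its attempts and returns the timeout error, which is the shape of the failure in this report.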
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-836868 -n embed-certs-836868
E0910 18:53:43.066455   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:53:43.072848   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:53:43.084182   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:53:43.105525   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:53:43.146905   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:53:43.228340   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:53:43.390328   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:53:43.712023   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:53:44.353587   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:53:45.635677   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:53:48.197582   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:53:53.319591   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:53:56.538412   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-836868 -n embed-certs-836868: exit status 3 (18.645276402s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:53:59.777453   70899 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host
	E0910 18:53:59.777472   70899 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-836868" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.15s)
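The post-mortem above cannot even open a TCP connection to the node's SSH port (dial tcp 192.168.39.107:22: connect: no route to host), which is why the status check exits with status 3 and the helper reports state "Error" and skips log retrieval. As a rough illustration only, the probeHost helper below is a hypothetical sketch of that reachability test; it is not the status command's real implementation.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeHost is a hypothetical helper: it reports whether the node's SSH port
// accepts TCP connections, which a status probe needs before it can run
// commands such as the /var capacity check seen in the log.
func probeHost(addr string, timeout time.Duration) string {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		// "connect: no route to host" and similar dial errors land here,
		// which the post-mortem reports as state "Error".
		return fmt.Sprintf("Error (%v)", err)
	}
	conn.Close()
	return "Running"
}

func main() {
	// 192.168.39.107 is the embed-certs node address from the log above;
	// the two-second timeout is arbitrary.
	fmt.Println(probeHost("192.168.39.107:22", 2*time.Second))
}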

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-347802 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-347802 --alsologtostderr -v=3: exit status 82 (2m0.494729208s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-347802"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 18:52:00.825172   70416 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:52:00.825296   70416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:52:00.825307   70416 out.go:358] Setting ErrFile to fd 2...
	I0910 18:52:00.825312   70416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:52:00.825511   70416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:52:00.825753   70416 out.go:352] Setting JSON to false
	I0910 18:52:00.825840   70416 mustload.go:65] Loading cluster: no-preload-347802
	I0910 18:52:00.826186   70416 config.go:182] Loaded profile config "no-preload-347802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:52:00.826296   70416 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/config.json ...
	I0910 18:52:00.826463   70416 mustload.go:65] Loading cluster: no-preload-347802
	I0910 18:52:00.826591   70416 config.go:182] Loaded profile config "no-preload-347802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:52:00.826626   70416 stop.go:39] StopHost: no-preload-347802
	I0910 18:52:00.827037   70416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:52:00.827093   70416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:52:00.841664   70416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43695
	I0910 18:52:00.842032   70416 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:52:00.842589   70416 main.go:141] libmachine: Using API Version  1
	I0910 18:52:00.842615   70416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:52:00.842958   70416 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:52:00.845062   70416 out.go:177] * Stopping node "no-preload-347802"  ...
	I0910 18:52:00.846689   70416 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0910 18:52:00.846712   70416 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:52:00.846933   70416 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0910 18:52:00.846957   70416 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:52:00.849783   70416 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:52:00.850287   70416 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:50:22 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:52:00.850319   70416 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:52:00.850470   70416 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:52:00.850644   70416 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:52:00.850803   70416 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:52:00.850941   70416 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:52:00.951452   70416 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0910 18:52:01.010739   70416 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0910 18:52:01.077194   70416 main.go:141] libmachine: Stopping "no-preload-347802"...
	I0910 18:52:01.077233   70416 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 18:52:01.078667   70416 main.go:141] libmachine: (no-preload-347802) Calling .Stop
	I0910 18:52:01.082542   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 0/120
	I0910 18:52:02.083825   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 1/120
	I0910 18:52:03.085333   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 2/120
	I0910 18:52:04.086574   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 3/120
	I0910 18:52:05.087746   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 4/120
	I0910 18:52:06.089776   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 5/120
	I0910 18:52:07.091930   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 6/120
	I0910 18:52:08.093376   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 7/120
	I0910 18:52:09.094824   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 8/120
	I0910 18:52:10.096408   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 9/120
	I0910 18:52:11.098743   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 10/120
	I0910 18:52:12.100156   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 11/120
	I0910 18:52:13.101533   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 12/120
	I0910 18:52:14.104015   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 13/120
	I0910 18:52:15.105271   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 14/120
	I0910 18:52:16.107262   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 15/120
	I0910 18:52:17.108899   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 16/120
	I0910 18:52:18.110241   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 17/120
	I0910 18:52:19.111706   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 18/120
	I0910 18:52:20.113037   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 19/120
	I0910 18:52:21.115281   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 20/120
	I0910 18:52:22.116527   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 21/120
	I0910 18:52:23.117956   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 22/120
	I0910 18:52:24.119547   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 23/120
	I0910 18:52:25.120906   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 24/120
	I0910 18:52:26.122792   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 25/120
	I0910 18:52:27.124326   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 26/120
	I0910 18:52:28.125621   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 27/120
	I0910 18:52:29.126987   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 28/120
	I0910 18:52:30.128403   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 29/120
	I0910 18:52:31.130733   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 30/120
	I0910 18:52:32.132077   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 31/120
	I0910 18:52:33.133668   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 32/120
	I0910 18:52:34.134980   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 33/120
	I0910 18:52:35.136617   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 34/120
	I0910 18:52:36.138244   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 35/120
	I0910 18:52:37.139630   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 36/120
	I0910 18:52:38.140792   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 37/120
	I0910 18:52:39.142070   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 38/120
	I0910 18:52:40.143254   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 39/120
	I0910 18:52:41.145270   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 40/120
	I0910 18:52:42.146569   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 41/120
	I0910 18:52:43.147960   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 42/120
	I0910 18:52:44.149195   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 43/120
	I0910 18:52:45.150398   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 44/120
	I0910 18:52:46.152179   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 45/120
	I0910 18:52:47.153435   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 46/120
	I0910 18:52:48.154688   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 47/120
	I0910 18:52:49.155924   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 48/120
	I0910 18:52:50.157086   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 49/120
	I0910 18:52:51.158933   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 50/120
	I0910 18:52:52.160338   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 51/120
	I0910 18:52:53.161567   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 52/120
	I0910 18:52:54.163110   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 53/120
	I0910 18:52:55.164548   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 54/120
	I0910 18:52:56.166367   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 55/120
	I0910 18:52:57.167556   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 56/120
	I0910 18:52:58.168829   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 57/120
	I0910 18:52:59.170316   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 58/120
	I0910 18:53:00.171558   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 59/120
	I0910 18:53:01.173873   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 60/120
	I0910 18:53:02.175218   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 61/120
	I0910 18:53:03.177039   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 62/120
	I0910 18:53:04.178367   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 63/120
	I0910 18:53:05.179742   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 64/120
	I0910 18:53:06.181999   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 65/120
	I0910 18:53:07.183252   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 66/120
	I0910 18:53:08.184637   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 67/120
	I0910 18:53:09.185957   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 68/120
	I0910 18:53:10.187555   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 69/120
	I0910 18:53:11.189901   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 70/120
	I0910 18:53:12.191593   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 71/120
	I0910 18:53:13.192955   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 72/120
	I0910 18:53:14.194191   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 73/120
	I0910 18:53:15.195475   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 74/120
	I0910 18:53:16.197332   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 75/120
	I0910 18:53:17.198851   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 76/120
	I0910 18:53:18.200363   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 77/120
	I0910 18:53:19.201703   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 78/120
	I0910 18:53:20.202896   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 79/120
	I0910 18:53:21.204900   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 80/120
	I0910 18:53:22.206128   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 81/120
	I0910 18:53:23.207341   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 82/120
	I0910 18:53:24.208624   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 83/120
	I0910 18:53:25.209938   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 84/120
	I0910 18:53:26.211924   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 85/120
	I0910 18:53:27.213304   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 86/120
	I0910 18:53:28.214777   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 87/120
	I0910 18:53:29.216101   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 88/120
	I0910 18:53:30.217498   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 89/120
	I0910 18:53:31.219658   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 90/120
	I0910 18:53:32.220971   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 91/120
	I0910 18:53:33.222226   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 92/120
	I0910 18:53:34.223434   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 93/120
	I0910 18:53:35.224820   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 94/120
	I0910 18:53:36.226912   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 95/120
	I0910 18:53:37.228027   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 96/120
	I0910 18:53:38.229562   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 97/120
	I0910 18:53:39.230820   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 98/120
	I0910 18:53:40.232311   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 99/120
	I0910 18:53:41.234470   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 100/120
	I0910 18:53:42.235772   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 101/120
	I0910 18:53:43.237012   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 102/120
	I0910 18:53:44.238263   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 103/120
	I0910 18:53:45.239607   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 104/120
	I0910 18:53:46.241312   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 105/120
	I0910 18:53:47.242637   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 106/120
	I0910 18:53:48.243850   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 107/120
	I0910 18:53:49.245304   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 108/120
	I0910 18:53:50.246475   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 109/120
	I0910 18:53:51.248419   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 110/120
	I0910 18:53:52.249858   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 111/120
	I0910 18:53:53.251244   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 112/120
	I0910 18:53:54.252533   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 113/120
	I0910 18:53:55.253928   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 114/120
	I0910 18:53:56.255663   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 115/120
	I0910 18:53:57.257277   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 116/120
	I0910 18:53:58.258571   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 117/120
	I0910 18:53:59.259911   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 118/120
	I0910 18:54:00.261134   70416 main.go:141] libmachine: (no-preload-347802) Waiting for machine to stop 119/120
	I0910 18:54:01.262492   70416 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0910 18:54:01.262563   70416 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0910 18:54:01.264433   70416 out.go:201] 
	W0910 18:54:01.265698   70416 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0910 18:54:01.265717   70416 out.go:270] * 
	* 
	W0910 18:54:01.269158   70416 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 18:54:01.270476   70416 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-347802 --alsologtostderr -v=3" : exit status 82
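The same first-stop failure repeats for the no-preload profile. To reproduce outside the test binary what the harness does here, one can run the identical command and inspect its exit code; in the sketch below only the minikube binary path, profile name, and flags come from the log, while the Go wrapper itself is illustrative and not part of the test suite.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the failing test step; adjust the binary path and
	// profile name for your environment.
	cmd := exec.Command("out/minikube-linux-amd64",
		"stop", "-p", "no-preload-347802", "--alsologtostderr", "-v=3")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The harness treats exit status 82 (GUEST_STOP_TIMEOUT) as a failure.
		fmt.Println("exit status:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
}

Run from the integration workspace root so the relative binary path resolves; an exit code of 82 matches the GUEST_STOP_TIMEOUT outcome recorded in this report.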
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-347802 -n no-preload-347802
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-347802 -n no-preload-347802: exit status 3 (18.473635851s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:54:19.745500   71026 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.138:22: connect: no route to host
	E0910 18:54:19.745522   71026 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.138:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-347802" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.97s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-557504 --alsologtostderr -v=3
E0910 18:52:09.913041   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:15.034631   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:15.663485   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:15.669837   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:15.681205   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:15.702553   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:15.743981   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:15.825423   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:15.987158   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:16.309455   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:16.951567   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:18.233876   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:20.796134   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:25.276195   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:25.918423   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:36.160634   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:45.758558   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:55.870820   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:55.877132   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:55.888476   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:55.909812   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:55.951268   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:56.032796   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:56.194664   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:56.516497   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:56.642170   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:57.158250   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:58.242211   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:58.440345   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:53:01.001639   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:53:06.123894   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:53:16.365285   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:53:26.719932   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:53:36.847279   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:53:37.603959   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-557504 --alsologtostderr -v=3: exit status 82 (2m0.458506547s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-557504"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 18:52:09.945296   70517 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:52:09.945397   70517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:52:09.945404   70517 out.go:358] Setting ErrFile to fd 2...
	I0910 18:52:09.945408   70517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:52:09.945590   70517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:52:09.945790   70517 out.go:352] Setting JSON to false
	I0910 18:52:09.945865   70517 mustload.go:65] Loading cluster: default-k8s-diff-port-557504
	I0910 18:52:09.946168   70517 config.go:182] Loaded profile config "default-k8s-diff-port-557504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:52:09.946231   70517 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/config.json ...
	I0910 18:52:09.946398   70517 mustload.go:65] Loading cluster: default-k8s-diff-port-557504
	I0910 18:52:09.946493   70517 config.go:182] Loaded profile config "default-k8s-diff-port-557504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:52:09.946519   70517 stop.go:39] StopHost: default-k8s-diff-port-557504
	I0910 18:52:09.946862   70517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:52:09.946900   70517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:52:09.961499   70517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38121
	I0910 18:52:09.961917   70517 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:52:09.962438   70517 main.go:141] libmachine: Using API Version  1
	I0910 18:52:09.962461   70517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:52:09.962778   70517 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:52:09.965032   70517 out.go:177] * Stopping node "default-k8s-diff-port-557504"  ...
	I0910 18:52:09.966145   70517 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0910 18:52:09.966177   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:52:09.966385   70517 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0910 18:52:09.966408   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:52:09.969008   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:52:09.969403   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:52:09.969427   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:52:09.969577   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:52:09.969704   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:52:09.969844   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:52:09.969957   70517 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:52:10.065617   70517 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0910 18:52:10.129287   70517 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0910 18:52:10.171404   70517 main.go:141] libmachine: Stopping "default-k8s-diff-port-557504"...
	I0910 18:52:10.171437   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:52:10.173182   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Stop
	I0910 18:52:10.176578   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 0/120
	I0910 18:52:11.178746   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 1/120
	I0910 18:52:12.180030   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 2/120
	I0910 18:52:13.181439   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 3/120
	I0910 18:52:14.182822   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 4/120
	I0910 18:52:15.184685   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 5/120
	I0910 18:52:16.186098   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 6/120
	I0910 18:52:17.187375   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 7/120
	I0910 18:52:18.188522   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 8/120
	I0910 18:52:19.189780   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 9/120
	I0910 18:52:20.192022   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 10/120
	I0910 18:52:21.193424   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 11/120
	I0910 18:52:22.194898   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 12/120
	I0910 18:52:23.196267   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 13/120
	I0910 18:52:24.197778   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 14/120
	I0910 18:52:25.199787   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 15/120
	I0910 18:52:26.201137   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 16/120
	I0910 18:52:27.202592   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 17/120
	I0910 18:52:28.203985   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 18/120
	I0910 18:52:29.205447   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 19/120
	I0910 18:52:30.207358   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 20/120
	I0910 18:52:31.208741   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 21/120
	I0910 18:52:32.210074   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 22/120
	I0910 18:52:33.211271   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 23/120
	I0910 18:52:34.212728   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 24/120
	I0910 18:52:35.214454   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 25/120
	I0910 18:52:36.215579   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 26/120
	I0910 18:52:37.216908   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 27/120
	I0910 18:52:38.218250   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 28/120
	I0910 18:52:39.219515   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 29/120
	I0910 18:52:40.221520   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 30/120
	I0910 18:52:41.222758   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 31/120
	I0910 18:52:42.224096   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 32/120
	I0910 18:52:43.225415   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 33/120
	I0910 18:52:44.226672   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 34/120
	I0910 18:52:45.228599   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 35/120
	I0910 18:52:46.230011   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 36/120
	I0910 18:52:47.231381   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 37/120
	I0910 18:52:48.232812   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 38/120
	I0910 18:52:49.234424   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 39/120
	I0910 18:52:50.236564   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 40/120
	I0910 18:52:51.238132   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 41/120
	I0910 18:52:52.239544   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 42/120
	I0910 18:52:53.241061   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 43/120
	I0910 18:52:54.242621   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 44/120
	I0910 18:52:55.244664   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 45/120
	I0910 18:52:56.245962   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 46/120
	I0910 18:52:57.247105   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 47/120
	I0910 18:52:58.248896   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 48/120
	I0910 18:52:59.250222   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 49/120
	I0910 18:53:00.252358   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 50/120
	I0910 18:53:01.253753   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 51/120
	I0910 18:53:02.254962   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 52/120
	I0910 18:53:03.256433   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 53/120
	I0910 18:53:04.257757   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 54/120
	I0910 18:53:05.259689   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 55/120
	I0910 18:53:06.261017   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 56/120
	I0910 18:53:07.262354   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 57/120
	I0910 18:53:08.263909   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 58/120
	I0910 18:53:09.265308   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 59/120
	I0910 18:53:10.267413   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 60/120
	I0910 18:53:11.268788   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 61/120
	I0910 18:53:12.270072   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 62/120
	I0910 18:53:13.271359   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 63/120
	I0910 18:53:14.272743   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 64/120
	I0910 18:53:15.274730   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 65/120
	I0910 18:53:16.275920   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 66/120
	I0910 18:53:17.277255   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 67/120
	I0910 18:53:18.278664   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 68/120
	I0910 18:53:19.280033   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 69/120
	I0910 18:53:20.282127   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 70/120
	I0910 18:53:21.283586   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 71/120
	I0910 18:53:22.284838   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 72/120
	I0910 18:53:23.286114   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 73/120
	I0910 18:53:24.287583   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 74/120
	I0910 18:53:25.289816   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 75/120
	I0910 18:53:26.291090   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 76/120
	I0910 18:53:27.292291   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 77/120
	I0910 18:53:28.293821   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 78/120
	I0910 18:53:29.295156   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 79/120
	I0910 18:53:30.297310   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 80/120
	I0910 18:53:31.298432   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 81/120
	I0910 18:53:32.299688   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 82/120
	I0910 18:53:33.300742   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 83/120
	I0910 18:53:34.302066   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 84/120
	I0910 18:53:35.303947   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 85/120
	I0910 18:53:36.305200   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 86/120
	I0910 18:53:37.306405   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 87/120
	I0910 18:53:38.308330   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 88/120
	I0910 18:53:39.309780   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 89/120
	I0910 18:53:40.311894   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 90/120
	I0910 18:53:41.313337   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 91/120
	I0910 18:53:42.314492   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 92/120
	I0910 18:53:43.315985   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 93/120
	I0910 18:53:44.317245   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 94/120
	I0910 18:53:45.319063   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 95/120
	I0910 18:53:46.320268   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 96/120
	I0910 18:53:47.321470   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 97/120
	I0910 18:53:48.322858   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 98/120
	I0910 18:53:49.324116   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 99/120
	I0910 18:53:50.326302   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 100/120
	I0910 18:53:51.327734   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 101/120
	I0910 18:53:52.328950   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 102/120
	I0910 18:53:53.330202   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 103/120
	I0910 18:53:54.331528   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 104/120
	I0910 18:53:55.333660   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 105/120
	I0910 18:53:56.334905   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 106/120
	I0910 18:53:57.336081   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 107/120
	I0910 18:53:58.337176   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 108/120
	I0910 18:53:59.338451   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 109/120
	I0910 18:54:00.340312   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 110/120
	I0910 18:54:01.341328   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 111/120
	I0910 18:54:02.342915   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 112/120
	I0910 18:54:03.344274   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 113/120
	I0910 18:54:04.345916   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 114/120
	I0910 18:54:05.347867   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 115/120
	I0910 18:54:06.349663   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 116/120
	I0910 18:54:07.351192   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 117/120
	I0910 18:54:08.352538   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 118/120
	I0910 18:54:09.353981   70517 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for machine to stop 119/120
	I0910 18:54:10.354528   70517 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0910 18:54:10.354588   70517 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0910 18:54:10.356416   70517 out.go:201] 
	W0910 18:54:10.357737   70517 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0910 18:54:10.357763   70517 out.go:270] * 
	W0910 18:54:10.361139   70517 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 18:54:10.362417   70517 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-557504 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-557504 -n default-k8s-diff-port-557504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-557504 -n default-k8s-diff-port-557504: exit status 3 (18.597712664s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:54:28.961437   71137 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host
	E0910 18:54:28.961458   71137 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-557504" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-836868 -n embed-certs-836868
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-836868 -n embed-certs-836868: exit status 3 (3.168166393s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:54:02.945496   70996 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host
	E0910 18:54:02.945518   70996 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-836868 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0910 18:54:03.561815   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-836868 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152944527s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-836868 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-836868 -n embed-certs-836868
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-836868 -n embed-certs-836868: exit status 3 (3.063296744s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:54:12.161427   71107 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host
	E0910 18:54:12.161452   71107 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-836868" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-432422 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-432422 create -f testdata/busybox.yaml: exit status 1 (41.502748ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-432422" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-432422 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-432422 -n old-k8s-version-432422
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-432422 -n old-k8s-version-432422: exit status 6 (210.537649ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:54:13.183235   71265 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-432422" does not appear in /home/jenkins/minikube-integration/19598-5973/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-432422" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-432422 -n old-k8s-version-432422
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-432422 -n old-k8s-version-432422: exit status 6 (212.790913ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:54:13.396677   71295 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-432422" does not appear in /home/jenkins/minikube-integration/19598-5973/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-432422" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (106.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-432422 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0910 18:54:17.809129   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-432422 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m46.323890453s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-432422 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-432422 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-432422 describe deploy/metrics-server -n kube-system: exit status 1 (41.215939ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-432422" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-432422 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-432422 -n old-k8s-version-432422
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-432422 -n old-k8s-version-432422: exit status 6 (211.753724ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:55:59.973935   71988 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-432422" does not appear in /home/jenkins/minikube-integration/19598-5973/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-432422" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (106.58s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-347802 -n no-preload-347802
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-347802 -n no-preload-347802: exit status 3 (3.167784856s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:54:22.913453   71371 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.138:22: connect: no route to host
	E0910 18:54:22.913475   71371 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.138:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-347802 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0910 18:54:23.979350   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:23.985684   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:23.997031   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:24.018375   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:24.043757   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:24.060077   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:24.141512   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:24.303056   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:24.624861   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:25.266129   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:26.548382   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-347802 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153682658s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.138:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-347802 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-347802 -n no-preload-347802
E0910 18:54:29.109958   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-347802 -n no-preload-347802: exit status 3 (3.063004719s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:54:32.129429   71482 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.138:22: connect: no route to host
	E0910 18:54:32.129456   71482 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.138:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-347802" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-557504 -n default-k8s-diff-port-557504
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-557504 -n default-k8s-diff-port-557504: exit status 3 (3.168001232s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:54:32.129442   71452 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host
	E0910 18:54:32.129459   71452 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-557504 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-557504 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153429885s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-557504 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-557504 -n default-k8s-diff-port-557504
E0910 18:54:38.788391   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:38.794712   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:38.806083   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:38.827449   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:38.868863   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:38.950294   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:39.111959   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:39.433709   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:40.075740   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-557504 -n default-k8s-diff-port-557504: exit status 3 (3.062260598s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0910 18:54:41.345448   71597 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host
	E0910 18:54:41.345472   71597 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-557504" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (723.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-432422 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0910 18:56:19.962699   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:56:26.927464   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:56:35.171718   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:57:00.924536   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:57:04.781741   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:57:07.838467   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:57:15.664173   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:57:22.648988   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:57:32.483326   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:57:43.367492   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:57:55.870607   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:58:22.846857   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:58:23.573160   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:58:43.066940   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:58:56.538263   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:59:10.768782   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:59:23.979300   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:59:38.788393   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:59:51.679976   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:00:06.490854   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:00:19.608807   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:00:38.986405   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:01:06.688155   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:01:35.171100   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:02:04.782315   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:02:15.663970   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:02:55.870380   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:03:43.066463   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:03:56.539001   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-432422 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m59.516353778s)

                                                
                                                
-- stdout --
	* [old-k8s-version-432422] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-432422" primary control-plane node in "old-k8s-version-432422" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-432422" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 18:56:02.487676   72122 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:56:02.487789   72122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:56:02.487799   72122 out.go:358] Setting ErrFile to fd 2...
	I0910 18:56:02.487804   72122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:56:02.487953   72122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:56:02.488491   72122 out.go:352] Setting JSON to false
	I0910 18:56:02.489572   72122 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5914,"bootTime":1725988648,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 18:56:02.489637   72122 start.go:139] virtualization: kvm guest
	I0910 18:56:02.491991   72122 out.go:177] * [old-k8s-version-432422] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 18:56:02.493117   72122 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:56:02.493113   72122 notify.go:220] Checking for updates...
	I0910 18:56:02.494213   72122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:56:02.495356   72122 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:56:02.496370   72122 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:56:02.497440   72122 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 18:56:02.498703   72122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:56:02.500450   72122 config.go:182] Loaded profile config "old-k8s-version-432422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0910 18:56:02.501100   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:56:02.501150   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:56:02.515836   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0910 18:56:02.516286   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:56:02.516787   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:56:02.516815   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:56:02.517116   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:56:02.517300   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:56:02.519092   72122 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0910 18:56:02.520121   72122 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:56:02.520405   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:56:02.520436   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:56:02.534860   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34601
	I0910 18:56:02.535243   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:56:02.535688   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:56:02.535711   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:56:02.536004   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:56:02.536215   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:56:02.570682   72122 out.go:177] * Using the kvm2 driver based on existing profile
	I0910 18:56:02.571710   72122 start.go:297] selected driver: kvm2
	I0910 18:56:02.571722   72122 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:56:02.571821   72122 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:56:02.572465   72122 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:56:02.572528   72122 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 18:56:02.587001   72122 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 18:56:02.587381   72122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:56:02.587417   72122 cni.go:84] Creating CNI manager for ""
	I0910 18:56:02.587427   72122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:56:02.587471   72122 start.go:340] cluster config:
	{Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:56:02.587599   72122 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:56:02.589116   72122 out.go:177] * Starting "old-k8s-version-432422" primary control-plane node in "old-k8s-version-432422" cluster
	I0910 18:56:02.590155   72122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 18:56:02.590185   72122 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0910 18:56:02.590194   72122 cache.go:56] Caching tarball of preloaded images
	I0910 18:56:02.590294   72122 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 18:56:02.590313   72122 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0910 18:56:02.590415   72122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/config.json ...
	I0910 18:56:02.590612   72122 start.go:360] acquireMachinesLock for old-k8s-version-432422: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:59:30.202285   72122 start.go:364] duration metric: took 3m27.611616445s to acquireMachinesLock for "old-k8s-version-432422"
	I0910 18:59:30.202346   72122 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:30.202377   72122 fix.go:54] fixHost starting: 
	I0910 18:59:30.202807   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:30.202842   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:30.222440   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0910 18:59:30.222927   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:30.223415   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:59:30.223435   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:30.223748   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:30.223905   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:30.224034   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetState
	I0910 18:59:30.225464   72122 fix.go:112] recreateIfNeeded on old-k8s-version-432422: state=Stopped err=<nil>
	I0910 18:59:30.225505   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	W0910 18:59:30.225655   72122 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:30.227698   72122 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-432422" ...
	I0910 18:59:30.228896   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .Start
	I0910 18:59:30.229066   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring networks are active...
	I0910 18:59:30.229735   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring network default is active
	I0910 18:59:30.230126   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring network mk-old-k8s-version-432422 is active
	I0910 18:59:30.230559   72122 main.go:141] libmachine: (old-k8s-version-432422) Getting domain xml...
	I0910 18:59:30.231206   72122 main.go:141] libmachine: (old-k8s-version-432422) Creating domain...
	I0910 18:59:31.669616   72122 main.go:141] libmachine: (old-k8s-version-432422) Waiting to get IP...
	I0910 18:59:31.670682   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:31.671124   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:31.671225   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:31.671101   72995 retry.go:31] will retry after 285.109621ms: waiting for machine to come up
	I0910 18:59:31.957711   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:31.958140   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:31.958169   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:31.958103   72995 retry.go:31] will retry after 306.703176ms: waiting for machine to come up
	I0910 18:59:32.266797   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:32.267299   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:32.267333   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:32.267226   72995 retry.go:31] will retry after 327.953362ms: waiting for machine to come up
	I0910 18:59:32.597017   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:32.597589   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:32.597616   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:32.597554   72995 retry.go:31] will retry after 448.654363ms: waiting for machine to come up
	I0910 18:59:33.048100   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:33.048559   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:33.048590   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:33.048478   72995 retry.go:31] will retry after 654.829574ms: waiting for machine to come up
	I0910 18:59:33.704902   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:33.705446   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:33.705475   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:33.705363   72995 retry.go:31] will retry after 610.514078ms: waiting for machine to come up
	I0910 18:59:34.316978   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:34.317481   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:34.317503   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:34.317430   72995 retry.go:31] will retry after 1.125805817s: waiting for machine to come up
	I0910 18:59:35.444880   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:35.445369   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:35.445394   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:35.445312   72995 retry.go:31] will retry after 1.484426931s: waiting for machine to come up
	I0910 18:59:36.931028   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:36.931568   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:36.931613   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:36.931524   72995 retry.go:31] will retry after 1.819998768s: waiting for machine to come up
	I0910 18:59:38.753463   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:38.754076   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:38.754107   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:38.754019   72995 retry.go:31] will retry after 2.258214375s: waiting for machine to come up
	I0910 18:59:41.013524   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:41.013988   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:41.014011   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:41.013910   72995 retry.go:31] will retry after 2.030553777s: waiting for machine to come up
	I0910 18:59:43.046937   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:43.047363   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:43.047393   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:43.047314   72995 retry.go:31] will retry after 2.233047134s: waiting for machine to come up
	I0910 18:59:45.282610   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:45.283104   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:45.283133   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:45.283026   72995 retry.go:31] will retry after 4.238676711s: waiting for machine to come up
	I0910 18:59:49.526000   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.526536   72122 main.go:141] libmachine: (old-k8s-version-432422) Found IP for machine: 192.168.61.51
	I0910 18:59:49.526558   72122 main.go:141] libmachine: (old-k8s-version-432422) Reserving static IP address...
	I0910 18:59:49.526569   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has current primary IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.527018   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "old-k8s-version-432422", mac: "52:54:00:65:ab:4d", ip: "192.168.61.51"} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.527063   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | skip adding static IP to network mk-old-k8s-version-432422 - found existing host DHCP lease matching {name: "old-k8s-version-432422", mac: "52:54:00:65:ab:4d", ip: "192.168.61.51"}
	I0910 18:59:49.527084   72122 main.go:141] libmachine: (old-k8s-version-432422) Reserved static IP address: 192.168.61.51
	I0910 18:59:49.527099   72122 main.go:141] libmachine: (old-k8s-version-432422) Waiting for SSH to be available...
	I0910 18:59:49.527113   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Getting to WaitForSSH function...
	I0910 18:59:49.529544   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.529962   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.529987   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.530143   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Using SSH client type: external
	I0910 18:59:49.530170   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa (-rw-------)
	I0910 18:59:49.530195   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:49.530208   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | About to run SSH command:
	I0910 18:59:49.530245   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | exit 0
	I0910 18:59:49.656944   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | SSH cmd err, output: <nil>: 
	I0910 18:59:49.657307   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetConfigRaw
	I0910 18:59:49.657926   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:49.660332   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.660689   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.660711   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.660992   72122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/config.json ...
	I0910 18:59:49.661238   72122 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:49.661259   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:49.661480   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.663824   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.664208   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.664236   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.664370   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.664565   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.664712   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.664887   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.665103   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.665392   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.665406   72122 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:49.769433   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:49.769468   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:49.769716   72122 buildroot.go:166] provisioning hostname "old-k8s-version-432422"
	I0910 18:59:49.769740   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:49.769918   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.772324   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.772710   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.772736   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.772875   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.773061   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.773245   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.773384   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.773554   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.773751   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.773764   72122 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-432422 && echo "old-k8s-version-432422" | sudo tee /etc/hostname
	I0910 18:59:49.891230   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-432422
	
	I0910 18:59:49.891259   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.894272   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.894641   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.894683   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.894820   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.894983   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.895094   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.895210   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.895330   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.895540   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.895559   72122 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-432422' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-432422/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-432422' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:59:50.011767   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:59:50.011795   72122 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:50.011843   72122 buildroot.go:174] setting up certificates
	I0910 18:59:50.011854   72122 provision.go:84] configureAuth start
	I0910 18:59:50.011866   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:50.012185   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:50.014947   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.015352   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.015388   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.015549   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.017712   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.018002   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.018036   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.018193   72122 provision.go:143] copyHostCerts
	I0910 18:59:50.018251   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:50.018265   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:50.018337   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:50.018481   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:50.018491   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:50.018513   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:50.018585   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:50.018594   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:50.018612   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:50.018667   72122 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-432422 san=[127.0.0.1 192.168.61.51 localhost minikube old-k8s-version-432422]
	I0910 18:59:50.528798   72122 provision.go:177] copyRemoteCerts
	I0910 18:59:50.528864   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:50.528900   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.532154   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.532576   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.532613   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.532765   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.532995   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.533205   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.533370   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:50.620169   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0910 18:59:50.647163   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:50.679214   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:50.704333   72122 provision.go:87] duration metric: took 692.46607ms to configureAuth
	I0910 18:59:50.704360   72122 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:50.704545   72122 config.go:182] Loaded profile config "old-k8s-version-432422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0910 18:59:50.704639   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.707529   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.707903   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.707931   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.708082   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.708297   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.708463   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.708641   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.708786   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:50.708954   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:50.708969   72122 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:50.935375   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:50.935403   72122 machine.go:96] duration metric: took 1.274152353s to provisionDockerMachine
	I0910 18:59:50.935414   72122 start.go:293] postStartSetup for "old-k8s-version-432422" (driver="kvm2")
	I0910 18:59:50.935424   72122 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:50.935448   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:50.935763   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:50.935796   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.938507   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.938865   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.938902   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.939008   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.939198   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.939529   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.939689   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.024726   72122 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:51.029522   72122 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:51.029547   72122 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:51.029632   72122 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:51.029734   72122 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:51.029848   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:51.042454   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:51.068748   72122 start.go:296] duration metric: took 133.318275ms for postStartSetup
	I0910 18:59:51.068792   72122 fix.go:56] duration metric: took 20.866428313s for fixHost
	I0910 18:59:51.068816   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.071533   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.071894   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.071921   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.072072   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.072264   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.072468   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.072616   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.072784   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:51.072938   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:51.072948   72122 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:51.181996   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994791.151610055
	
	I0910 18:59:51.182016   72122 fix.go:216] guest clock: 1725994791.151610055
	I0910 18:59:51.182024   72122 fix.go:229] Guest: 2024-09-10 18:59:51.151610055 +0000 UTC Remote: 2024-09-10 18:59:51.068796263 +0000 UTC m=+228.614166738 (delta=82.813792ms)
	I0910 18:59:51.182048   72122 fix.go:200] guest clock delta is within tolerance: 82.813792ms
	I0910 18:59:51.182055   72122 start.go:83] releasing machines lock for "old-k8s-version-432422", held for 20.979733564s
	I0910 18:59:51.182094   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.182331   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:51.184857   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.185183   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.185212   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.185346   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.185840   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.186006   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.186079   72122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:51.186143   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.186215   72122 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:51.186238   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.189304   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189674   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.189698   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189765   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189879   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.190057   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.190212   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.190230   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.190255   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.190358   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.190470   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.190652   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.190817   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.190948   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.296968   72122 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:51.303144   72122 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:51.447027   72122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:51.454963   72122 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:51.455032   72122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:51.474857   72122 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:51.474882   72122 start.go:495] detecting cgroup driver to use...
	I0910 18:59:51.474957   72122 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:51.490457   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:51.504502   72122 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:51.504569   72122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:51.523331   72122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:51.543438   72122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:51.678734   72122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:51.831736   72122 docker.go:233] disabling docker service ...
	I0910 18:59:51.831804   72122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:51.846805   72122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:51.865771   72122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:52.012922   72122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:52.161595   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:52.180034   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:52.200984   72122 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0910 18:59:52.201041   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.211927   72122 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:52.211989   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.223601   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.234211   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.246209   72122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:52.264079   72122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:52.277144   72122 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:52.277204   72122 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:52.292683   72122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:52.304601   72122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:52.421971   72122 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:59:52.544386   72122 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:52.544459   72122 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:52.551436   72122 start.go:563] Will wait 60s for crictl version
	I0910 18:59:52.551487   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:52.555614   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:52.598031   72122 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:52.598128   72122 ssh_runner.go:195] Run: crio --version
	I0910 18:59:52.629578   72122 ssh_runner.go:195] Run: crio --version
	I0910 18:59:52.662403   72122 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0910 18:59:52.663465   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:52.666401   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:52.666796   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:52.666843   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:52.667002   72122 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:52.672338   72122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:52.688427   72122 kubeadm.go:883] updating cluster {Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:52.688559   72122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 18:59:52.688623   72122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:52.740370   72122 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0910 18:59:52.740447   72122 ssh_runner.go:195] Run: which lz4
	I0910 18:59:52.744925   72122 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 18:59:52.749840   72122 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 18:59:52.749872   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0910 18:59:54.437031   72122 crio.go:462] duration metric: took 1.692132914s to copy over tarball
	I0910 18:59:54.437124   72122 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 18:59:57.462705   72122 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.025545297s)
	I0910 18:59:57.462743   72122 crio.go:469] duration metric: took 3.025690485s to extract the tarball
	I0910 18:59:57.462753   72122 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 18:59:57.508817   72122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:57.551327   72122 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0910 18:59:57.551350   72122 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 18:59:57.551434   72122 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:57.551704   72122 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.551776   72122 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.552000   72122 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.551807   72122 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.551846   72122 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.551714   72122 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0910 18:59:57.551917   72122 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.553642   72122 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.553660   72122 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.553917   72122 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:57.553935   72122 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0910 18:59:57.554014   72122 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.554160   72122 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.554376   72122 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.554662   72122 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.726191   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.742799   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.745264   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.753214   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.768122   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.770828   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0910 18:59:57.774835   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.807657   72122 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0910 18:59:57.807693   72122 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.807733   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.908662   72122 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0910 18:59:57.908678   72122 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0910 18:59:57.908707   72122 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.908711   72122 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.908759   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.908760   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.920214   72122 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0910 18:59:57.920248   72122 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0910 18:59:57.920258   72122 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.920280   72122 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.920304   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.920313   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.937914   72122 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0910 18:59:57.937952   72122 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.937958   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.937999   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.938033   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.938006   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.938073   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.938063   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.938157   72122 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0910 18:59:57.938185   72122 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0910 18:59:57.938215   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:58.044082   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:58.044139   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:58.044146   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:58.044173   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.045813   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:58.045816   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:58.045849   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.198804   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:58.198841   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:58.198881   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:58.198944   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.198978   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:58.199000   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:58.199081   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.353153   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0910 18:59:58.353217   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0910 18:59:58.353232   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0910 18:59:58.353277   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0910 18:59:58.359353   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.359363   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0910 18:59:58.359421   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.386872   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:58.407734   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0910 18:59:58.425479   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0910 18:59:58.553340   72122 cache_images.go:92] duration metric: took 1.001972084s to LoadCachedImages
	W0910 18:59:58.553438   72122 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0910 18:59:58.553455   72122 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.20.0 crio true true} ...
	I0910 18:59:58.553634   72122 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-432422 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:59:58.553722   72122 ssh_runner.go:195] Run: crio config
	I0910 18:59:58.605518   72122 cni.go:84] Creating CNI manager for ""
	I0910 18:59:58.605542   72122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:58.605554   72122 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:58.605577   72122 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-432422 NodeName:old-k8s-version-432422 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0910 18:59:58.605744   72122 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-432422"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:59:58.605814   72122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0910 18:59:58.618033   72122 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:58.618096   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:58.629175   72122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0910 18:59:58.653830   72122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:58.679797   72122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0910 18:59:58.698692   72122 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:58.702565   72122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:58.715128   72122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:58.858262   72122 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:58.876681   72122 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422 for IP: 192.168.61.51
	I0910 18:59:58.876719   72122 certs.go:194] generating shared ca certs ...
	I0910 18:59:58.876740   72122 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:58.876921   72122 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:58.876983   72122 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:58.876996   72122 certs.go:256] generating profile certs ...
	I0910 18:59:58.877129   72122 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/client.key
	I0910 18:59:58.877210   72122 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key.da6b542b
	I0910 18:59:58.877264   72122 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.key
	I0910 18:59:58.877424   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:58.877473   72122 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:58.877491   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:58.877528   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:58.877560   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:58.877591   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:58.877648   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:58.878410   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:58.936013   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:58.969736   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:59.017414   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:59.063599   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0910 18:59:59.093934   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:59:59.138026   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:59.166507   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 18:59:59.196972   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:59.223596   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:59.250627   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:59.279886   72122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:59.300491   72122 ssh_runner.go:195] Run: openssl version
	I0910 18:59:59.306521   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:59.317238   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.321625   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.321682   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.327532   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:59.339028   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:59.350578   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.355025   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.355106   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.360701   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:59.375040   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:59.389867   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.395829   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.395890   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.402425   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:59:59.414077   72122 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:59.418909   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:59.425061   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:59.431213   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:59.437581   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:59.443603   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:59.449820   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 18:59:59.456100   72122 kubeadm.go:392] StartCluster: {Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:59.456189   72122 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:59.456234   72122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:59.497167   72122 cri.go:89] found id: ""
	I0910 18:59:59.497227   72122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:59.508449   72122 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:59.508474   72122 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:59.508527   72122 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:59.521416   72122 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:59.522489   72122 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-432422" does not appear in /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:59:59.523125   72122 kubeconfig.go:62] /home/jenkins/minikube-integration/19598-5973/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-432422" cluster setting kubeconfig missing "old-k8s-version-432422" context setting]
	I0910 18:59:59.524107   72122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:59.637793   72122 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:59.651879   72122 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.51
	I0910 18:59:59.651916   72122 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:59.651930   72122 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:59.651989   72122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:59.691857   72122 cri.go:89] found id: ""
	I0910 18:59:59.691922   72122 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:59.708610   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:59.718680   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:59.718702   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:59.718755   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:59:59.729965   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:59.730028   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:59.740037   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:59:59.750640   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:59.750706   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:59.762436   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:59:59.773456   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:59.773522   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:59.783438   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:59:59.792996   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:59.793056   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:59:59.805000   72122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:59:59.815384   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:59.955068   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:00.842403   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.102530   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.212897   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.340128   72122 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:00:01.340217   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:01.841004   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:02.340913   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:02.840935   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:03.340938   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:03.840669   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:04.341213   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:04.841274   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:05.340698   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:05.841152   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:06.340425   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:06.841001   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:07.341198   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:07.840772   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:08.341153   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:08.840737   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:09.340471   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:09.840262   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:10.340827   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:10.840645   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:11.340524   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:11.840521   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:12.340560   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:12.841060   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:13.340347   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:13.841136   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:14.340603   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:14.840913   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:15.341205   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:15.840692   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:16.340839   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:16.841050   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:17.341340   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:17.840510   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:18.340821   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:18.841156   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.340316   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.840339   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.341140   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.841333   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:21.340342   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:21.840282   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:22.340361   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:22.841048   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.341180   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.841325   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:24.340485   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:24.841340   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:25.340935   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:25.840886   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:26.340826   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:26.840344   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:27.341189   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:27.840306   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:28.340657   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:28.841179   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:29.340881   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:29.840957   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:30.341260   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:30.841151   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:31.340930   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:31.840360   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:32.341199   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:32.841192   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:33.340518   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:33.840995   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:34.341016   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:34.840480   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:35.340647   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:35.840416   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:36.340921   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:36.840557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:37.340956   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:37.841210   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:38.341302   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:38.840926   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:39.340558   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:39.840395   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:40.341022   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:40.841093   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:41.341228   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:41.841103   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:42.340329   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:42.841000   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:43.341147   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:43.840534   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:44.340988   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:44.840926   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:45.340859   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:45.840877   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:46.340930   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:46.841175   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:47.341064   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:47.841037   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.341204   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.840961   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:49.340679   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:49.841173   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:50.340751   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:50.841158   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:51.340999   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:51.840349   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:52.340383   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:52.840991   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:53.340439   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:53.840487   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:54.340407   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:54.840557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:55.340603   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:55.840619   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:56.340844   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:56.841190   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:57.340927   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:57.840798   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:58.340905   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:58.841330   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:59.340743   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:59.840256   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:00.340970   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:00.840732   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:01.340927   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:01.341014   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:01.378922   72122 cri.go:89] found id: ""
	I0910 19:01:01.378953   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.378964   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:01.378971   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:01.379032   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:01.413274   72122 cri.go:89] found id: ""
	I0910 19:01:01.413302   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.413313   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:01.413320   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:01.413383   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:01.449165   72122 cri.go:89] found id: ""
	I0910 19:01:01.449204   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.449215   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:01.449221   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:01.449291   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:01.484627   72122 cri.go:89] found id: ""
	I0910 19:01:01.484650   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.484657   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:01.484663   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:01.484720   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:01.519332   72122 cri.go:89] found id: ""
	I0910 19:01:01.519357   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.519364   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:01.519370   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:01.519424   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:01.554080   72122 cri.go:89] found id: ""
	I0910 19:01:01.554102   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.554109   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:01.554114   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:01.554160   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:01.590100   72122 cri.go:89] found id: ""
	I0910 19:01:01.590131   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.590143   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:01.590149   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:01.590208   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:01.623007   72122 cri.go:89] found id: ""
	I0910 19:01:01.623034   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.623045   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:01.623055   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:01.623070   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:01.679940   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:01.679971   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:01.694183   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:01.694218   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:01.826997   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:01.827025   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:01.827038   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:01.903885   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:01.903926   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:04.450792   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:04.471427   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:04.471501   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:04.521450   72122 cri.go:89] found id: ""
	I0910 19:01:04.521484   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.521494   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:04.521503   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:04.521562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:04.577588   72122 cri.go:89] found id: ""
	I0910 19:01:04.577622   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.577633   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:04.577641   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:04.577707   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:04.615558   72122 cri.go:89] found id: ""
	I0910 19:01:04.615586   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.615594   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:04.615599   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:04.615652   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:04.655763   72122 cri.go:89] found id: ""
	I0910 19:01:04.655793   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.655806   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:04.655815   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:04.655881   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:04.692620   72122 cri.go:89] found id: ""
	I0910 19:01:04.692642   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.692649   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:04.692654   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:04.692709   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:04.730575   72122 cri.go:89] found id: ""
	I0910 19:01:04.730601   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.730611   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:04.730616   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:04.730665   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:04.766716   72122 cri.go:89] found id: ""
	I0910 19:01:04.766742   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.766749   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:04.766754   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:04.766799   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:04.808122   72122 cri.go:89] found id: ""
	I0910 19:01:04.808151   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.808162   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:04.808173   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:04.808185   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:04.858563   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:04.858592   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:04.872323   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:04.872350   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:04.942541   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:04.942571   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:04.942588   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:05.022303   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:05.022338   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:07.562092   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:07.575254   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:07.575308   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:07.616583   72122 cri.go:89] found id: ""
	I0910 19:01:07.616607   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.616615   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:07.616620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:07.616676   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:07.654676   72122 cri.go:89] found id: ""
	I0910 19:01:07.654700   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.654711   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:07.654718   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:07.654790   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:07.690054   72122 cri.go:89] found id: ""
	I0910 19:01:07.690085   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.690096   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:07.690104   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:07.690171   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:07.724273   72122 cri.go:89] found id: ""
	I0910 19:01:07.724295   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.724302   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:07.724307   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:07.724363   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:07.757621   72122 cri.go:89] found id: ""
	I0910 19:01:07.757646   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.757654   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:07.757660   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:07.757716   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:07.791502   72122 cri.go:89] found id: ""
	I0910 19:01:07.791533   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.791543   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:07.791557   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:07.791620   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:07.825542   72122 cri.go:89] found id: ""
	I0910 19:01:07.825577   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.825586   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:07.825592   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:07.825649   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:07.862278   72122 cri.go:89] found id: ""
	I0910 19:01:07.862303   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.862312   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:07.862320   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:07.862331   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:07.952016   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:07.952059   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:07.997004   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:07.997034   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:08.047745   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:08.047783   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:08.064712   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:08.064736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:08.136822   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:10.637017   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:10.650113   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:10.650198   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:10.687477   72122 cri.go:89] found id: ""
	I0910 19:01:10.687504   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.687513   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:10.687520   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:10.687594   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:10.721410   72122 cri.go:89] found id: ""
	I0910 19:01:10.721437   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.721447   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:10.721455   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:10.721514   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:10.757303   72122 cri.go:89] found id: ""
	I0910 19:01:10.757330   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.757338   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:10.757343   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:10.757396   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:10.794761   72122 cri.go:89] found id: ""
	I0910 19:01:10.794788   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.794799   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:10.794806   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:10.794885   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:10.828631   72122 cri.go:89] found id: ""
	I0910 19:01:10.828657   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.828668   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:10.828675   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:10.828737   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:10.863609   72122 cri.go:89] found id: ""
	I0910 19:01:10.863634   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.863641   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:10.863646   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:10.863734   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:10.899299   72122 cri.go:89] found id: ""
	I0910 19:01:10.899324   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.899335   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:10.899342   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:10.899403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:10.939233   72122 cri.go:89] found id: ""
	I0910 19:01:10.939259   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.939268   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:10.939277   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:10.939290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:10.976599   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:10.976627   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:11.029099   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:11.029144   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:11.045401   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:11.045426   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:11.119658   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:11.119679   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:11.119696   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:13.698696   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:13.712317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:13.712386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:13.747442   72122 cri.go:89] found id: ""
	I0910 19:01:13.747470   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.747480   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:13.747487   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:13.747555   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:13.782984   72122 cri.go:89] found id: ""
	I0910 19:01:13.783008   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.783015   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:13.783021   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:13.783078   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:13.820221   72122 cri.go:89] found id: ""
	I0910 19:01:13.820245   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.820256   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:13.820262   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:13.820322   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:13.854021   72122 cri.go:89] found id: ""
	I0910 19:01:13.854056   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.854068   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:13.854075   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:13.854138   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:13.888292   72122 cri.go:89] found id: ""
	I0910 19:01:13.888321   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.888331   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:13.888338   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:13.888398   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:13.922301   72122 cri.go:89] found id: ""
	I0910 19:01:13.922330   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.922341   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:13.922349   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:13.922408   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:13.959977   72122 cri.go:89] found id: ""
	I0910 19:01:13.960002   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.960010   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:13.960015   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:13.960074   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:13.995255   72122 cri.go:89] found id: ""
	I0910 19:01:13.995282   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.995293   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:13.995308   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:13.995323   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:14.050760   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:14.050790   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:14.064694   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:14.064723   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:14.137406   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:14.137431   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:14.137447   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:14.216624   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:14.216657   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:16.765643   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:16.778746   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:16.778821   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:16.814967   72122 cri.go:89] found id: ""
	I0910 19:01:16.814999   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.815010   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:16.815017   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:16.815073   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:16.850306   72122 cri.go:89] found id: ""
	I0910 19:01:16.850334   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.850345   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:16.850352   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:16.850413   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:16.886104   72122 cri.go:89] found id: ""
	I0910 19:01:16.886134   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.886144   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:16.886152   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:16.886218   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:16.921940   72122 cri.go:89] found id: ""
	I0910 19:01:16.921968   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.921977   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:16.921983   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:16.922032   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:16.956132   72122 cri.go:89] found id: ""
	I0910 19:01:16.956166   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.956177   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:16.956185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:16.956247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:16.988240   72122 cri.go:89] found id: ""
	I0910 19:01:16.988269   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.988278   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:16.988284   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:16.988330   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:17.022252   72122 cri.go:89] found id: ""
	I0910 19:01:17.022281   72122 logs.go:276] 0 containers: []
	W0910 19:01:17.022291   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:17.022297   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:17.022364   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:17.058664   72122 cri.go:89] found id: ""
	I0910 19:01:17.058693   72122 logs.go:276] 0 containers: []
	W0910 19:01:17.058703   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:17.058715   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:17.058740   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:17.136927   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:17.136964   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:17.189427   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:17.189457   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:17.242193   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:17.242225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:17.257878   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:17.257908   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:17.330096   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:19.831030   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:19.844516   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:19.844581   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:19.879878   72122 cri.go:89] found id: ""
	I0910 19:01:19.879908   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.879919   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:19.879927   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:19.879988   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:19.915992   72122 cri.go:89] found id: ""
	I0910 19:01:19.916018   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.916025   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:19.916030   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:19.916084   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:19.949206   72122 cri.go:89] found id: ""
	I0910 19:01:19.949232   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.949242   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:19.949249   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:19.949311   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:19.983011   72122 cri.go:89] found id: ""
	I0910 19:01:19.983035   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.983043   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:19.983048   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:19.983096   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:20.018372   72122 cri.go:89] found id: ""
	I0910 19:01:20.018394   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.018402   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:20.018408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:20.018466   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:20.053941   72122 cri.go:89] found id: ""
	I0910 19:01:20.053967   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.053975   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:20.053980   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:20.054037   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:20.084999   72122 cri.go:89] found id: ""
	I0910 19:01:20.085026   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.085035   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:20.085042   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:20.085115   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:20.124036   72122 cri.go:89] found id: ""
	I0910 19:01:20.124063   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.124072   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:20.124086   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:20.124103   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:20.176917   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:20.176944   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:20.190831   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:20.190852   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:20.257921   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:20.257942   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:20.257954   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:20.335320   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:20.335350   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:22.875167   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:22.888803   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:22.888858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:22.922224   72122 cri.go:89] found id: ""
	I0910 19:01:22.922252   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.922264   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:22.922270   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:22.922328   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:22.959502   72122 cri.go:89] found id: ""
	I0910 19:01:22.959536   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.959546   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:22.959553   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:22.959619   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:22.992914   72122 cri.go:89] found id: ""
	I0910 19:01:22.992944   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.992955   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:22.992962   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:22.993022   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:23.028342   72122 cri.go:89] found id: ""
	I0910 19:01:23.028367   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.028376   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:23.028384   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:23.028443   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:23.064715   72122 cri.go:89] found id: ""
	I0910 19:01:23.064742   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.064753   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:23.064761   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:23.064819   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:23.100752   72122 cri.go:89] found id: ""
	I0910 19:01:23.100781   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.100789   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:23.100795   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:23.100857   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:23.136017   72122 cri.go:89] found id: ""
	I0910 19:01:23.136045   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.136055   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:23.136062   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:23.136108   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:23.170787   72122 cri.go:89] found id: ""
	I0910 19:01:23.170811   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.170819   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:23.170826   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:23.170840   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:23.210031   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:23.210059   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:23.261525   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:23.261557   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:23.275611   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:23.275636   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:23.348543   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:23.348568   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:23.348582   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:25.929406   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:25.942658   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:25.942737   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:25.977231   72122 cri.go:89] found id: ""
	I0910 19:01:25.977260   72122 logs.go:276] 0 containers: []
	W0910 19:01:25.977270   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:25.977277   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:25.977336   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:26.015060   72122 cri.go:89] found id: ""
	I0910 19:01:26.015093   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.015103   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:26.015110   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:26.015180   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:26.053618   72122 cri.go:89] found id: ""
	I0910 19:01:26.053643   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.053651   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:26.053656   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:26.053712   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:26.090489   72122 cri.go:89] found id: ""
	I0910 19:01:26.090515   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.090523   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:26.090529   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:26.090587   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:26.126687   72122 cri.go:89] found id: ""
	I0910 19:01:26.126710   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.126718   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:26.126723   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:26.126771   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:26.160901   72122 cri.go:89] found id: ""
	I0910 19:01:26.160939   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.160951   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:26.160959   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:26.161017   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:26.195703   72122 cri.go:89] found id: ""
	I0910 19:01:26.195728   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.195737   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:26.195743   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:26.195794   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:26.230394   72122 cri.go:89] found id: ""
	I0910 19:01:26.230414   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.230422   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:26.230430   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:26.230444   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:26.296884   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:26.296905   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:26.296926   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:26.371536   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:26.371569   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:26.412926   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:26.412958   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:26.462521   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:26.462550   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:28.976550   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:28.989517   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:28.989586   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:29.025638   72122 cri.go:89] found id: ""
	I0910 19:01:29.025662   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.025671   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:29.025677   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:29.025719   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:29.067473   72122 cri.go:89] found id: ""
	I0910 19:01:29.067495   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.067502   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:29.067507   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:29.067556   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:29.105587   72122 cri.go:89] found id: ""
	I0910 19:01:29.105616   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.105628   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:29.105635   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:29.105696   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:29.142427   72122 cri.go:89] found id: ""
	I0910 19:01:29.142458   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.142468   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:29.142474   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:29.142530   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:29.178553   72122 cri.go:89] found id: ""
	I0910 19:01:29.178575   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.178582   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:29.178587   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:29.178638   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:29.212997   72122 cri.go:89] found id: ""
	I0910 19:01:29.213025   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.213034   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:29.213040   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:29.213109   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:29.247057   72122 cri.go:89] found id: ""
	I0910 19:01:29.247083   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.247091   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:29.247097   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:29.247151   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:29.285042   72122 cri.go:89] found id: ""
	I0910 19:01:29.285084   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.285096   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:29.285107   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:29.285131   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:29.336003   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:29.336033   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:29.349867   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:29.349890   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:29.422006   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:29.422028   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:29.422043   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:29.504047   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:29.504079   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:32.050723   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:32.063851   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:32.063904   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:32.100816   72122 cri.go:89] found id: ""
	I0910 19:01:32.100841   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.100851   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:32.100858   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:32.100924   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:32.134863   72122 cri.go:89] found id: ""
	I0910 19:01:32.134892   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.134902   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:32.134909   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:32.134967   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:32.169873   72122 cri.go:89] found id: ""
	I0910 19:01:32.169901   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.169912   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:32.169919   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:32.169973   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:32.202161   72122 cri.go:89] found id: ""
	I0910 19:01:32.202187   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.202197   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:32.202204   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:32.202264   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:32.236850   72122 cri.go:89] found id: ""
	I0910 19:01:32.236879   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.236888   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:32.236896   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:32.236957   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:32.271479   72122 cri.go:89] found id: ""
	I0910 19:01:32.271511   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.271530   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:32.271542   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:32.271701   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:32.306724   72122 cri.go:89] found id: ""
	I0910 19:01:32.306747   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.306754   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:32.306760   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:32.306811   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:32.341153   72122 cri.go:89] found id: ""
	I0910 19:01:32.341184   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.341195   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:32.341206   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:32.341221   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:32.393087   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:32.393122   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:32.406565   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:32.406591   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:32.478030   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:32.478048   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:32.478079   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:32.568440   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:32.568478   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:35.112022   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:35.125210   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:35.125286   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:35.160716   72122 cri.go:89] found id: ""
	I0910 19:01:35.160743   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.160753   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:35.160759   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:35.160817   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:35.196500   72122 cri.go:89] found id: ""
	I0910 19:01:35.196530   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.196541   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:35.196548   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:35.196622   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:35.232476   72122 cri.go:89] found id: ""
	I0910 19:01:35.232510   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.232521   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:35.232528   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:35.232590   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:35.269612   72122 cri.go:89] found id: ""
	I0910 19:01:35.269635   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.269644   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:35.269649   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:35.269697   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:35.307368   72122 cri.go:89] found id: ""
	I0910 19:01:35.307393   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.307401   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:35.307408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:35.307475   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:35.342079   72122 cri.go:89] found id: ""
	I0910 19:01:35.342108   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.342119   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:35.342126   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:35.342188   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:35.379732   72122 cri.go:89] found id: ""
	I0910 19:01:35.379761   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.379771   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:35.379778   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:35.379840   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:35.419067   72122 cri.go:89] found id: ""
	I0910 19:01:35.419098   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.419109   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:35.419120   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:35.419139   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:35.472459   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:35.472494   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:35.487044   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:35.487078   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:35.565242   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:35.565264   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:35.565282   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:35.645918   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:35.645951   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:38.189238   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:38.203973   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:38.204035   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:38.241402   72122 cri.go:89] found id: ""
	I0910 19:01:38.241428   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.241438   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:38.241446   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:38.241506   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:38.280657   72122 cri.go:89] found id: ""
	I0910 19:01:38.280685   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.280693   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:38.280698   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:38.280753   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:38.319697   72122 cri.go:89] found id: ""
	I0910 19:01:38.319725   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.319735   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:38.319742   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:38.319804   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:38.356766   72122 cri.go:89] found id: ""
	I0910 19:01:38.356799   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.356810   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:38.356817   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:38.356876   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:38.395468   72122 cri.go:89] found id: ""
	I0910 19:01:38.395497   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.395508   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:38.395516   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:38.395577   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:38.434942   72122 cri.go:89] found id: ""
	I0910 19:01:38.434965   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.434974   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:38.434979   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:38.435025   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:38.470687   72122 cri.go:89] found id: ""
	I0910 19:01:38.470715   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.470724   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:38.470729   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:38.470777   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:38.505363   72122 cri.go:89] found id: ""
	I0910 19:01:38.505394   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.505405   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:38.505417   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:38.505432   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:38.557735   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:38.557770   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:38.586094   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:38.586128   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:38.665190   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:38.665215   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:38.665231   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:38.743748   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:38.743779   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:41.284310   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:41.299086   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:41.299157   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:41.340453   72122 cri.go:89] found id: ""
	I0910 19:01:41.340476   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.340484   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:41.340489   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:41.340544   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:41.374028   72122 cri.go:89] found id: ""
	I0910 19:01:41.374052   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.374060   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:41.374066   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:41.374117   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:41.413888   72122 cri.go:89] found id: ""
	I0910 19:01:41.413915   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.413929   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:41.413935   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:41.413994   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:41.450846   72122 cri.go:89] found id: ""
	I0910 19:01:41.450873   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.450883   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:41.450890   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:41.450950   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:41.484080   72122 cri.go:89] found id: ""
	I0910 19:01:41.484107   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.484115   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:41.484120   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:41.484168   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:41.523652   72122 cri.go:89] found id: ""
	I0910 19:01:41.523677   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.523685   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:41.523690   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:41.523749   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:41.563690   72122 cri.go:89] found id: ""
	I0910 19:01:41.563715   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.563727   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:41.563734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:41.563797   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:41.602101   72122 cri.go:89] found id: ""
	I0910 19:01:41.602122   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.602130   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:41.602137   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:41.602152   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:41.655459   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:41.655488   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:41.670037   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:41.670062   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:41.741399   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:41.741417   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:41.741428   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:41.817411   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:41.817445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:44.363631   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:44.378279   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:44.378344   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:44.412450   72122 cri.go:89] found id: ""
	I0910 19:01:44.412486   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.412495   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:44.412502   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:44.412569   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:44.448378   72122 cri.go:89] found id: ""
	I0910 19:01:44.448407   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.448415   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:44.448420   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:44.448470   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:44.483478   72122 cri.go:89] found id: ""
	I0910 19:01:44.483516   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.483524   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:44.483530   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:44.483584   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:44.517787   72122 cri.go:89] found id: ""
	I0910 19:01:44.517812   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.517822   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:44.517828   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:44.517886   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:44.554909   72122 cri.go:89] found id: ""
	I0910 19:01:44.554939   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.554950   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:44.554957   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:44.555018   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:44.589865   72122 cri.go:89] found id: ""
	I0910 19:01:44.589890   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.589909   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:44.589923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:44.589968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:44.626712   72122 cri.go:89] found id: ""
	I0910 19:01:44.626739   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.626749   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:44.626756   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:44.626815   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:44.664985   72122 cri.go:89] found id: ""
	I0910 19:01:44.665067   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.665103   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:44.665114   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:44.665165   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:44.721160   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:44.721196   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:44.735339   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:44.735366   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:44.810056   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:44.810080   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:44.810094   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:44.898822   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:44.898871   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:47.438440   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:47.451438   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:47.451506   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:47.491703   72122 cri.go:89] found id: ""
	I0910 19:01:47.491729   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.491740   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:47.491747   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:47.491811   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:47.526834   72122 cri.go:89] found id: ""
	I0910 19:01:47.526862   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.526874   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:47.526880   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:47.526940   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:47.570463   72122 cri.go:89] found id: ""
	I0910 19:01:47.570488   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.570496   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:47.570503   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:47.570545   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:47.608691   72122 cri.go:89] found id: ""
	I0910 19:01:47.608715   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.608727   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:47.608734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:47.608780   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:47.648279   72122 cri.go:89] found id: ""
	I0910 19:01:47.648308   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.648316   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:47.648324   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:47.648386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:47.684861   72122 cri.go:89] found id: ""
	I0910 19:01:47.684885   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.684892   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:47.684897   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:47.684947   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:47.721004   72122 cri.go:89] found id: ""
	I0910 19:01:47.721037   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.721049   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:47.721056   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:47.721134   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:47.756154   72122 cri.go:89] found id: ""
	I0910 19:01:47.756181   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.756192   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:47.756202   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:47.756217   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:47.806860   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:47.806889   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:47.822419   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:47.822445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:47.891966   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:47.891986   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:47.892000   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:47.978510   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:47.978561   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:50.519264   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:50.533576   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:50.533630   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:50.567574   72122 cri.go:89] found id: ""
	I0910 19:01:50.567601   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.567612   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:50.567619   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:50.567678   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:50.608824   72122 cri.go:89] found id: ""
	I0910 19:01:50.608850   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.608858   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:50.608863   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:50.608939   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:50.644502   72122 cri.go:89] found id: ""
	I0910 19:01:50.644530   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.644538   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:50.644544   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:50.644590   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:50.682309   72122 cri.go:89] found id: ""
	I0910 19:01:50.682332   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.682340   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:50.682345   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:50.682404   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:50.735372   72122 cri.go:89] found id: ""
	I0910 19:01:50.735398   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.735410   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:50.735418   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:50.735482   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:50.786364   72122 cri.go:89] found id: ""
	I0910 19:01:50.786391   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.786401   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:50.786408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:50.786464   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:50.831525   72122 cri.go:89] found id: ""
	I0910 19:01:50.831564   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.831575   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:50.831582   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:50.831645   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:50.873457   72122 cri.go:89] found id: ""
	I0910 19:01:50.873482   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.873493   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:50.873503   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:50.873524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:50.956032   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:50.956069   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:50.996871   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:50.996904   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:51.047799   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:51.047824   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:51.061946   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:51.061970   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:51.136302   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:53.636464   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:53.649971   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:53.650054   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:53.688172   72122 cri.go:89] found id: ""
	I0910 19:01:53.688201   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.688211   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:53.688217   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:53.688274   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:53.725094   72122 cri.go:89] found id: ""
	I0910 19:01:53.725119   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.725128   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:53.725135   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:53.725196   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:53.763866   72122 cri.go:89] found id: ""
	I0910 19:01:53.763893   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.763907   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:53.763914   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:53.763971   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:53.797760   72122 cri.go:89] found id: ""
	I0910 19:01:53.797787   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.797798   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:53.797805   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:53.797862   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:53.830305   72122 cri.go:89] found id: ""
	I0910 19:01:53.830332   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.830340   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:53.830346   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:53.830402   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:53.861970   72122 cri.go:89] found id: ""
	I0910 19:01:53.861995   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.862003   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:53.862009   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:53.862059   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:53.896577   72122 cri.go:89] found id: ""
	I0910 19:01:53.896600   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.896609   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:53.896614   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:53.896660   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:53.935051   72122 cri.go:89] found id: ""
	I0910 19:01:53.935077   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.935086   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:53.935094   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:53.935105   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:53.950252   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:53.950276   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:54.023327   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:54.023346   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:54.023361   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:54.101605   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:54.101643   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:54.142906   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:54.142930   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:56.697701   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:56.717755   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:56.717836   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:56.763564   72122 cri.go:89] found id: ""
	I0910 19:01:56.763594   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.763606   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:56.763613   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:56.763675   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:56.815780   72122 cri.go:89] found id: ""
	I0910 19:01:56.815808   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.815816   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:56.815821   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:56.815883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:56.848983   72122 cri.go:89] found id: ""
	I0910 19:01:56.849013   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.849024   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:56.849032   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:56.849100   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:56.880660   72122 cri.go:89] found id: ""
	I0910 19:01:56.880690   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.880702   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:56.880709   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:56.880756   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:56.922836   72122 cri.go:89] found id: ""
	I0910 19:01:56.922860   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.922867   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:56.922873   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:56.922938   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:56.963474   72122 cri.go:89] found id: ""
	I0910 19:01:56.963505   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.963517   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:56.963524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:56.963585   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:56.996837   72122 cri.go:89] found id: ""
	I0910 19:01:56.996864   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.996872   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:56.996877   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:56.996925   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:57.029594   72122 cri.go:89] found id: ""
	I0910 19:01:57.029629   72122 logs.go:276] 0 containers: []
	W0910 19:01:57.029640   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:57.029651   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:57.029664   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:57.083745   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:57.083772   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:57.099269   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:57.099293   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:57.174098   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:57.174118   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:57.174129   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:57.258833   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:57.258869   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:59.800644   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:59.814728   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:59.814805   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:59.854081   72122 cri.go:89] found id: ""
	I0910 19:01:59.854113   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.854124   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:59.854133   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:59.854197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:59.889524   72122 cri.go:89] found id: ""
	I0910 19:01:59.889550   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.889560   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:59.889567   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:59.889626   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:59.925833   72122 cri.go:89] found id: ""
	I0910 19:01:59.925859   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.925866   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:59.925872   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:59.925935   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:59.962538   72122 cri.go:89] found id: ""
	I0910 19:01:59.962575   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.962586   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:59.962593   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:59.962650   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:59.996994   72122 cri.go:89] found id: ""
	I0910 19:01:59.997025   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.997037   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:59.997045   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:59.997126   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:00.032881   72122 cri.go:89] found id: ""
	I0910 19:02:00.032905   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.032915   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:00.032923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:00.032988   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:00.065838   72122 cri.go:89] found id: ""
	I0910 19:02:00.065861   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.065869   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:00.065874   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:00.065927   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:00.099479   72122 cri.go:89] found id: ""
	I0910 19:02:00.099505   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.099516   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:00.099526   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:00.099540   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:00.182661   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:00.182689   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:00.223514   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:00.223553   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:00.273695   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:00.273721   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:00.287207   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:00.287233   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:00.353975   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:02.854145   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:02.867413   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:02.867484   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:02.904299   72122 cri.go:89] found id: ""
	I0910 19:02:02.904327   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.904335   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:02.904340   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:02.904392   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:02.940981   72122 cri.go:89] found id: ""
	I0910 19:02:02.941010   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.941019   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:02.941024   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:02.941099   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:02.980013   72122 cri.go:89] found id: ""
	I0910 19:02:02.980038   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.980046   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:02.980052   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:02.980111   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:03.020041   72122 cri.go:89] found id: ""
	I0910 19:02:03.020071   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.020080   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:03.020087   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:03.020144   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:03.055228   72122 cri.go:89] found id: ""
	I0910 19:02:03.055264   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.055277   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:03.055285   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:03.055347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:03.088696   72122 cri.go:89] found id: ""
	I0910 19:02:03.088722   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.088730   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:03.088736   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:03.088787   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:03.124753   72122 cri.go:89] found id: ""
	I0910 19:02:03.124776   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.124785   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:03.124792   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:03.124849   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:03.157191   72122 cri.go:89] found id: ""
	I0910 19:02:03.157222   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.157230   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:03.157238   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:03.157248   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:03.239015   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:03.239044   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:03.279323   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:03.279355   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:03.328034   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:03.328067   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:03.341591   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:03.341620   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:03.411057   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:05.911503   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:05.924794   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:05.924868   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:05.958827   72122 cri.go:89] found id: ""
	I0910 19:02:05.958852   72122 logs.go:276] 0 containers: []
	W0910 19:02:05.958859   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:05.958865   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:05.958920   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:05.992376   72122 cri.go:89] found id: ""
	I0910 19:02:05.992412   72122 logs.go:276] 0 containers: []
	W0910 19:02:05.992423   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:05.992429   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:05.992485   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:06.028058   72122 cri.go:89] found id: ""
	I0910 19:02:06.028088   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.028098   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:06.028107   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:06.028162   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:06.066428   72122 cri.go:89] found id: ""
	I0910 19:02:06.066458   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.066470   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:06.066477   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:06.066533   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:06.102750   72122 cri.go:89] found id: ""
	I0910 19:02:06.102774   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.102782   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:06.102787   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:06.102841   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:06.137216   72122 cri.go:89] found id: ""
	I0910 19:02:06.137243   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.137254   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:06.137261   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:06.137323   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:06.175227   72122 cri.go:89] found id: ""
	I0910 19:02:06.175251   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.175259   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:06.175265   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:06.175311   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:06.210197   72122 cri.go:89] found id: ""
	I0910 19:02:06.210222   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.210230   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:06.210238   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:06.210249   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:06.261317   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:06.261353   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:06.275196   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:06.275225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:06.354186   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:06.354205   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:06.354219   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:06.436726   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:06.436763   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:08.979157   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:08.992097   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:08.992156   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:09.025260   72122 cri.go:89] found id: ""
	I0910 19:02:09.025282   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.025289   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:09.025295   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:09.025360   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:09.059139   72122 cri.go:89] found id: ""
	I0910 19:02:09.059166   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.059177   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:09.059186   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:09.059240   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:09.092935   72122 cri.go:89] found id: ""
	I0910 19:02:09.092964   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.092973   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:09.092979   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:09.093027   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:09.127273   72122 cri.go:89] found id: ""
	I0910 19:02:09.127299   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.127310   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:09.127317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:09.127367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:09.163353   72122 cri.go:89] found id: ""
	I0910 19:02:09.163380   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.163389   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:09.163396   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:09.163453   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:09.195371   72122 cri.go:89] found id: ""
	I0910 19:02:09.195396   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.195407   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:09.195414   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:09.195473   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:09.229338   72122 cri.go:89] found id: ""
	I0910 19:02:09.229361   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.229370   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:09.229376   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:09.229432   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:09.262822   72122 cri.go:89] found id: ""
	I0910 19:02:09.262847   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.262857   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:09.262874   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:09.262891   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:09.330079   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:09.330103   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:09.330119   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:09.408969   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:09.409003   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:09.447666   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:09.447702   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:09.501111   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:09.501141   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:12.016407   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:12.030822   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:12.030905   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:12.069191   72122 cri.go:89] found id: ""
	I0910 19:02:12.069218   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.069229   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:12.069236   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:12.069306   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:12.103687   72122 cri.go:89] found id: ""
	I0910 19:02:12.103726   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.103737   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:12.103862   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:12.103937   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:12.142891   72122 cri.go:89] found id: ""
	I0910 19:02:12.142920   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.142932   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:12.142940   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:12.142998   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:12.178966   72122 cri.go:89] found id: ""
	I0910 19:02:12.178991   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.179002   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:12.179010   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:12.179069   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:12.216070   72122 cri.go:89] found id: ""
	I0910 19:02:12.216093   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.216104   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:12.216112   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:12.216161   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:12.251447   72122 cri.go:89] found id: ""
	I0910 19:02:12.251479   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.251492   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:12.251500   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:12.251568   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:12.284640   72122 cri.go:89] found id: ""
	I0910 19:02:12.284666   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.284677   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:12.284682   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:12.284743   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:12.319601   72122 cri.go:89] found id: ""
	I0910 19:02:12.319625   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.319632   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:12.319639   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:12.319650   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:12.372932   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:12.372965   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:12.387204   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:12.387228   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:12.459288   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:12.459308   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:12.459323   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:12.549161   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:12.549198   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:15.092557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:15.105391   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:15.105456   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:15.139486   72122 cri.go:89] found id: ""
	I0910 19:02:15.139515   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.139524   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:15.139530   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:15.139591   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:15.173604   72122 cri.go:89] found id: ""
	I0910 19:02:15.173630   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.173641   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:15.173648   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:15.173710   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:15.208464   72122 cri.go:89] found id: ""
	I0910 19:02:15.208492   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.208503   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:15.208510   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:15.208568   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:15.247536   72122 cri.go:89] found id: ""
	I0910 19:02:15.247567   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.247579   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:15.247586   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:15.247650   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:15.285734   72122 cri.go:89] found id: ""
	I0910 19:02:15.285764   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.285775   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:15.285782   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:15.285858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:15.320755   72122 cri.go:89] found id: ""
	I0910 19:02:15.320782   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.320792   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:15.320798   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:15.320849   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:15.357355   72122 cri.go:89] found id: ""
	I0910 19:02:15.357384   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.357395   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:15.357402   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:15.357463   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:15.392105   72122 cri.go:89] found id: ""
	I0910 19:02:15.392130   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.392137   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:15.392149   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:15.392160   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:15.444433   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:15.444465   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:15.458759   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:15.458784   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:15.523490   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:15.523507   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:15.523524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:15.607584   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:15.607616   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:18.146611   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:18.160311   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:18.160378   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:18.195072   72122 cri.go:89] found id: ""
	I0910 19:02:18.195099   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.195109   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:18.195127   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:18.195201   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:18.230099   72122 cri.go:89] found id: ""
	I0910 19:02:18.230129   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.230138   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:18.230145   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:18.230201   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:18.268497   72122 cri.go:89] found id: ""
	I0910 19:02:18.268525   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.268534   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:18.268539   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:18.268599   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:18.304929   72122 cri.go:89] found id: ""
	I0910 19:02:18.304966   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.304978   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:18.304985   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:18.305048   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:18.339805   72122 cri.go:89] found id: ""
	I0910 19:02:18.339839   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.339861   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:18.339868   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:18.339925   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:18.378353   72122 cri.go:89] found id: ""
	I0910 19:02:18.378372   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.378381   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:18.378393   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:18.378438   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:18.415175   72122 cri.go:89] found id: ""
	I0910 19:02:18.415195   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.415203   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:18.415208   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:18.415262   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:18.450738   72122 cri.go:89] found id: ""
	I0910 19:02:18.450762   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.450769   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:18.450778   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:18.450793   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:18.530943   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:18.530975   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:18.568983   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:18.569021   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:18.622301   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:18.622336   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:18.635788   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:18.635815   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:18.715729   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:21.216082   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:21.229419   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:21.229488   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:21.265152   72122 cri.go:89] found id: ""
	I0910 19:02:21.265183   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.265193   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:21.265201   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:21.265262   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:21.300766   72122 cri.go:89] found id: ""
	I0910 19:02:21.300797   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.300815   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:21.300823   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:21.300883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:21.333416   72122 cri.go:89] found id: ""
	I0910 19:02:21.333443   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.333452   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:21.333460   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:21.333526   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:21.371112   72122 cri.go:89] found id: ""
	I0910 19:02:21.371142   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.371150   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:21.371156   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:21.371214   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:21.405657   72122 cri.go:89] found id: ""
	I0910 19:02:21.405684   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.405695   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:21.405703   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:21.405755   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:21.440354   72122 cri.go:89] found id: ""
	I0910 19:02:21.440381   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.440392   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:21.440400   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:21.440464   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:21.480165   72122 cri.go:89] found id: ""
	I0910 19:02:21.480189   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.480199   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:21.480206   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:21.480273   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:21.518422   72122 cri.go:89] found id: ""
	I0910 19:02:21.518449   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.518459   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:21.518470   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:21.518486   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:21.572263   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:21.572300   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:21.588179   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:21.588204   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:21.658330   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:21.658356   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:21.658371   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:21.743026   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:21.743063   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:24.286604   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:24.299783   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:24.299847   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:24.336998   72122 cri.go:89] found id: ""
	I0910 19:02:24.337031   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.337042   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:24.337050   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:24.337123   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:24.374198   72122 cri.go:89] found id: ""
	I0910 19:02:24.374223   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.374231   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:24.374236   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:24.374289   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:24.407783   72122 cri.go:89] found id: ""
	I0910 19:02:24.407812   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.407822   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:24.407828   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:24.407881   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:24.443285   72122 cri.go:89] found id: ""
	I0910 19:02:24.443307   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.443315   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:24.443321   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:24.443367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:24.477176   72122 cri.go:89] found id: ""
	I0910 19:02:24.477198   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.477206   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:24.477212   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:24.477266   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:24.509762   72122 cri.go:89] found id: ""
	I0910 19:02:24.509783   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.509791   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:24.509797   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:24.509858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:24.548746   72122 cri.go:89] found id: ""
	I0910 19:02:24.548775   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.548785   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:24.548793   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:24.548851   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:24.583265   72122 cri.go:89] found id: ""
	I0910 19:02:24.583297   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.583313   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:24.583324   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:24.583338   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:24.634966   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:24.635001   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:24.649844   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:24.649869   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:24.721795   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:24.721824   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:24.721840   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:24.807559   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:24.807593   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:27.352779   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:27.366423   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:27.366495   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:27.399555   72122 cri.go:89] found id: ""
	I0910 19:02:27.399582   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.399591   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:27.399596   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:27.399662   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:27.434151   72122 cri.go:89] found id: ""
	I0910 19:02:27.434179   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.434188   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:27.434194   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:27.434265   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:27.467053   72122 cri.go:89] found id: ""
	I0910 19:02:27.467081   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.467092   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:27.467099   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:27.467156   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:27.500999   72122 cri.go:89] found id: ""
	I0910 19:02:27.501030   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.501039   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:27.501044   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:27.501114   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:27.537981   72122 cri.go:89] found id: ""
	I0910 19:02:27.538000   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.538007   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:27.538012   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:27.538060   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:27.568622   72122 cri.go:89] found id: ""
	I0910 19:02:27.568649   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.568660   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:27.568668   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:27.568724   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:27.603035   72122 cri.go:89] found id: ""
	I0910 19:02:27.603058   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.603067   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:27.603072   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:27.603131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:27.637624   72122 cri.go:89] found id: ""
	I0910 19:02:27.637651   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.637662   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:27.637673   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:27.637693   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:27.651893   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:27.651915   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:27.723949   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:27.723969   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:27.723983   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:27.801463   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:27.801496   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:27.841969   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:27.842000   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:30.398857   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:30.412720   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:30.412790   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:30.448125   72122 cri.go:89] found id: ""
	I0910 19:02:30.448152   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.448163   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:30.448171   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:30.448234   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:30.481988   72122 cri.go:89] found id: ""
	I0910 19:02:30.482016   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.482027   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:30.482035   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:30.482083   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:30.516548   72122 cri.go:89] found id: ""
	I0910 19:02:30.516576   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.516583   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:30.516589   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:30.516646   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:30.566884   72122 cri.go:89] found id: ""
	I0910 19:02:30.566910   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.566918   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:30.566923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:30.566975   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:30.602278   72122 cri.go:89] found id: ""
	I0910 19:02:30.602306   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.602314   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:30.602319   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:30.602379   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:30.636708   72122 cri.go:89] found id: ""
	I0910 19:02:30.636732   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.636740   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:30.636745   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:30.636797   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:30.681255   72122 cri.go:89] found id: ""
	I0910 19:02:30.681280   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.681295   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:30.681303   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:30.681361   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:30.715516   72122 cri.go:89] found id: ""
	I0910 19:02:30.715543   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.715551   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:30.715560   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:30.715572   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:30.768916   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:30.768948   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:30.783318   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:30.783348   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:30.852901   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:30.852925   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:30.852940   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:30.932276   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:30.932314   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:33.471931   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:33.486152   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:33.486211   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:33.524130   72122 cri.go:89] found id: ""
	I0910 19:02:33.524161   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.524173   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:33.524180   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:33.524238   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:33.562216   72122 cri.go:89] found id: ""
	I0910 19:02:33.562238   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.562247   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:33.562252   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:33.562305   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:33.596587   72122 cri.go:89] found id: ""
	I0910 19:02:33.596615   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.596626   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:33.596634   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:33.596692   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:33.633307   72122 cri.go:89] found id: ""
	I0910 19:02:33.633330   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.633338   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:33.633343   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:33.633403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:33.667780   72122 cri.go:89] found id: ""
	I0910 19:02:33.667805   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.667815   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:33.667820   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:33.667878   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:33.702406   72122 cri.go:89] found id: ""
	I0910 19:02:33.702436   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.702447   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:33.702456   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:33.702524   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:33.744544   72122 cri.go:89] found id: ""
	I0910 19:02:33.744574   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.744581   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:33.744587   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:33.744661   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:33.782000   72122 cri.go:89] found id: ""
	I0910 19:02:33.782024   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.782032   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:33.782040   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:33.782053   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:33.858087   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:33.858115   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:33.858133   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:33.943238   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:33.943278   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:33.987776   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:33.987804   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:34.043197   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:34.043232   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:36.558122   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:36.571125   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:36.571195   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:36.606195   72122 cri.go:89] found id: ""
	I0910 19:02:36.606228   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.606239   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:36.606246   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:36.606304   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:36.640248   72122 cri.go:89] found id: ""
	I0910 19:02:36.640290   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.640302   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:36.640310   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:36.640360   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:36.676916   72122 cri.go:89] found id: ""
	I0910 19:02:36.676942   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.676952   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:36.676958   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:36.677013   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:36.713183   72122 cri.go:89] found id: ""
	I0910 19:02:36.713207   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.713218   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:36.713225   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:36.713283   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:36.750748   72122 cri.go:89] found id: ""
	I0910 19:02:36.750775   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.750782   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:36.750787   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:36.750847   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:36.782614   72122 cri.go:89] found id: ""
	I0910 19:02:36.782636   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.782644   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:36.782650   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:36.782709   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:36.822051   72122 cri.go:89] found id: ""
	I0910 19:02:36.822083   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.822094   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:36.822102   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:36.822161   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:36.856068   72122 cri.go:89] found id: ""
	I0910 19:02:36.856096   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.856106   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:36.856117   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:36.856132   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:36.909586   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:36.909625   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:36.931649   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:36.931676   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:37.040146   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:37.040175   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:37.040194   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:37.121902   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:37.121942   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:39.665474   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:39.678573   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:39.678633   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:39.712755   72122 cri.go:89] found id: ""
	I0910 19:02:39.712783   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.712793   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:39.712800   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:39.712857   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:39.744709   72122 cri.go:89] found id: ""
	I0910 19:02:39.744738   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.744748   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:39.744756   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:39.744809   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:39.780161   72122 cri.go:89] found id: ""
	I0910 19:02:39.780189   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.780200   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:39.780207   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:39.780255   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:39.817665   72122 cri.go:89] found id: ""
	I0910 19:02:39.817695   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.817704   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:39.817709   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:39.817757   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:39.857255   72122 cri.go:89] found id: ""
	I0910 19:02:39.857291   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.857299   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:39.857306   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:39.857381   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:39.893514   72122 cri.go:89] found id: ""
	I0910 19:02:39.893540   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.893550   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:39.893558   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:39.893614   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:39.932720   72122 cri.go:89] found id: ""
	I0910 19:02:39.932753   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.932767   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:39.932775   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:39.932835   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:39.977063   72122 cri.go:89] found id: ""
	I0910 19:02:39.977121   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.977135   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:39.977146   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:39.977168   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:39.991414   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:39.991445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:40.066892   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:40.066910   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:40.066922   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:40.150648   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:40.150680   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:40.198519   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:40.198561   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:42.749906   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:42.769633   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:42.769703   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:42.812576   72122 cri.go:89] found id: ""
	I0910 19:02:42.812603   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.812613   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:42.812620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:42.812682   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:42.846233   72122 cri.go:89] found id: ""
	I0910 19:02:42.846257   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.846266   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:42.846271   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:42.846326   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:42.883564   72122 cri.go:89] found id: ""
	I0910 19:02:42.883593   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.883605   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:42.883612   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:42.883669   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:42.920774   72122 cri.go:89] found id: ""
	I0910 19:02:42.920801   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.920813   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:42.920820   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:42.920883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:42.953776   72122 cri.go:89] found id: ""
	I0910 19:02:42.953808   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.953820   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:42.953829   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:42.953887   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:42.989770   72122 cri.go:89] found id: ""
	I0910 19:02:42.989806   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.989820   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:42.989829   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:42.989893   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:43.022542   72122 cri.go:89] found id: ""
	I0910 19:02:43.022567   72122 logs.go:276] 0 containers: []
	W0910 19:02:43.022574   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:43.022580   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:43.022629   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:43.064308   72122 cri.go:89] found id: ""
	I0910 19:02:43.064329   72122 logs.go:276] 0 containers: []
	W0910 19:02:43.064337   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:43.064344   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:43.064356   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:43.120212   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:43.120243   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:43.134269   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:43.134296   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:43.218840   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:43.218865   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:43.218880   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:43.302560   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:43.302591   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:45.842788   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:45.857495   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:45.857557   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:45.892745   72122 cri.go:89] found id: ""
	I0910 19:02:45.892772   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.892782   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:45.892790   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:45.892888   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:45.928451   72122 cri.go:89] found id: ""
	I0910 19:02:45.928476   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.928486   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:45.928493   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:45.928551   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:45.962868   72122 cri.go:89] found id: ""
	I0910 19:02:45.962899   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.962910   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:45.962918   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:45.962979   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:45.996975   72122 cri.go:89] found id: ""
	I0910 19:02:45.997000   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.997009   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:45.997014   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:45.997065   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:46.032271   72122 cri.go:89] found id: ""
	I0910 19:02:46.032299   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.032309   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:46.032317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:46.032375   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:46.072629   72122 cri.go:89] found id: ""
	I0910 19:02:46.072654   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.072662   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:46.072667   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:46.072713   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:46.112196   72122 cri.go:89] found id: ""
	I0910 19:02:46.112220   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.112228   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:46.112233   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:46.112298   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:46.155700   72122 cri.go:89] found id: ""
	I0910 19:02:46.155732   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.155745   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:46.155759   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:46.155794   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:46.210596   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:46.210624   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:46.224951   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:46.224980   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:46.294571   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:46.294597   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:46.294613   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:46.382431   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:46.382495   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:48.926582   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:48.941256   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:48.941338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:48.979810   72122 cri.go:89] found id: ""
	I0910 19:02:48.979842   72122 logs.go:276] 0 containers: []
	W0910 19:02:48.979849   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:48.979856   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:48.979917   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:49.015083   72122 cri.go:89] found id: ""
	I0910 19:02:49.015126   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.015136   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:49.015144   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:49.015205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:49.052417   72122 cri.go:89] found id: ""
	I0910 19:02:49.052445   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.052453   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:49.052459   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:49.052511   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:49.092485   72122 cri.go:89] found id: ""
	I0910 19:02:49.092523   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.092533   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:49.092538   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:49.092588   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:49.127850   72122 cri.go:89] found id: ""
	I0910 19:02:49.127882   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.127889   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:49.127897   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:49.127952   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:49.160693   72122 cri.go:89] found id: ""
	I0910 19:02:49.160724   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.160733   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:49.160740   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:49.160798   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:49.194713   72122 cri.go:89] found id: ""
	I0910 19:02:49.194737   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.194744   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:49.194750   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:49.194804   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:49.229260   72122 cri.go:89] found id: ""
	I0910 19:02:49.229283   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.229292   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:49.229303   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:49.229320   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:49.281963   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:49.281992   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:49.294789   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:49.294809   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:49.366126   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:49.366152   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:49.366172   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:49.451187   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:49.451225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:51.990361   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:52.003744   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:52.003807   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:52.036794   72122 cri.go:89] found id: ""
	I0910 19:02:52.036824   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.036834   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:52.036840   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:52.036896   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:52.074590   72122 cri.go:89] found id: ""
	I0910 19:02:52.074613   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.074620   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:52.074625   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:52.074675   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:52.119926   72122 cri.go:89] found id: ""
	I0910 19:02:52.119967   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.119981   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:52.119990   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:52.120075   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:52.157862   72122 cri.go:89] found id: ""
	I0910 19:02:52.157889   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.157900   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:52.157906   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:52.157968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:52.198645   72122 cri.go:89] found id: ""
	I0910 19:02:52.198675   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.198686   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:52.198693   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:52.198756   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:52.240091   72122 cri.go:89] found id: ""
	I0910 19:02:52.240113   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.240129   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:52.240139   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:52.240197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:52.275046   72122 cri.go:89] found id: ""
	I0910 19:02:52.275079   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.275090   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:52.275098   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:52.275179   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:52.311141   72122 cri.go:89] found id: ""
	I0910 19:02:52.311172   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.311184   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:52.311196   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:52.311211   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:52.400004   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:52.400039   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:52.449043   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:52.449090   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:52.502304   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:52.502336   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:52.518747   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:52.518772   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:52.593581   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
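Every "describe nodes" attempt fails the same way: the bundled v1.20.0 kubectl is pointed at the apiserver on localhost:8443 via /var/lib/minikube/kubeconfig, and nothing is listening there because no kube-apiserver container ever started. A quick manual confirmation on the node is sketched below; the kubectl command is the one from the log, while the ss check and the --request-timeout flag are added here purely for illustration:

    # Check whether anything is listening on the apiserver port.
    sudo ss -ltnp | grep ':8443' || echo "nothing listening on :8443"
    # Re-run the same describe-nodes call minikube uses, with a short timeout.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig --request-timeout=10s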
	I0910 19:02:55.094092   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:55.108752   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:55.108830   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:55.143094   72122 cri.go:89] found id: ""
	I0910 19:02:55.143122   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.143133   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:55.143141   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:55.143198   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:55.184298   72122 cri.go:89] found id: ""
	I0910 19:02:55.184326   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.184334   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:55.184340   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:55.184397   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:55.216557   72122 cri.go:89] found id: ""
	I0910 19:02:55.216585   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.216596   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:55.216613   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:55.216676   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:55.251049   72122 cri.go:89] found id: ""
	I0910 19:02:55.251075   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.251083   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:55.251090   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:55.251152   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:55.282689   72122 cri.go:89] found id: ""
	I0910 19:02:55.282716   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.282724   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:55.282729   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:55.282800   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:55.316959   72122 cri.go:89] found id: ""
	I0910 19:02:55.316993   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.317004   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:55.317011   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:55.317085   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:55.353110   72122 cri.go:89] found id: ""
	I0910 19:02:55.353134   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.353143   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:55.353149   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:55.353205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:55.392391   72122 cri.go:89] found id: ""
	I0910 19:02:55.392422   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.392434   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:55.392446   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:55.392461   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:55.445431   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:55.445469   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:55.459348   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:55.459374   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:55.528934   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:55.528957   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:55.528973   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:55.610797   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:55.610833   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:58.152775   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:58.166383   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:58.166440   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:58.203198   72122 cri.go:89] found id: ""
	I0910 19:02:58.203225   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.203233   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:58.203239   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:58.203284   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:58.240538   72122 cri.go:89] found id: ""
	I0910 19:02:58.240560   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.240567   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:58.240573   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:58.240633   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:58.274802   72122 cri.go:89] found id: ""
	I0910 19:02:58.274826   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.274833   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:58.274839   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:58.274886   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:58.311823   72122 cri.go:89] found id: ""
	I0910 19:02:58.311857   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.311868   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:58.311876   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:58.311933   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:58.347260   72122 cri.go:89] found id: ""
	I0910 19:02:58.347281   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.347288   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:58.347294   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:58.347338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:58.382621   72122 cri.go:89] found id: ""
	I0910 19:02:58.382645   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.382655   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:58.382662   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:58.382720   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:58.418572   72122 cri.go:89] found id: ""
	I0910 19:02:58.418597   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.418605   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:58.418611   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:58.418663   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:58.459955   72122 cri.go:89] found id: ""
	I0910 19:02:58.459987   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.459995   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:58.460003   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:58.460016   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:58.512831   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:58.512866   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:58.527036   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:58.527067   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:58.593329   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:58.593350   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:58.593366   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:58.671171   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:58.671201   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:01.211905   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:01.226567   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:01.226724   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:01.261860   72122 cri.go:89] found id: ""
	I0910 19:03:01.261885   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.261893   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:01.261898   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:01.261946   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:01.294754   72122 cri.go:89] found id: ""
	I0910 19:03:01.294774   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.294781   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:01.294786   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:01.294833   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:01.328378   72122 cri.go:89] found id: ""
	I0910 19:03:01.328403   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.328412   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:01.328417   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:01.328465   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:01.363344   72122 cri.go:89] found id: ""
	I0910 19:03:01.363370   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.363380   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:01.363388   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:01.363446   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:01.398539   72122 cri.go:89] found id: ""
	I0910 19:03:01.398576   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.398586   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:01.398593   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:01.398654   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:01.431367   72122 cri.go:89] found id: ""
	I0910 19:03:01.431390   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.431397   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:01.431403   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:01.431458   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:01.464562   72122 cri.go:89] found id: ""
	I0910 19:03:01.464589   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.464599   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:01.464606   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:01.464666   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:01.497493   72122 cri.go:89] found id: ""
	I0910 19:03:01.497520   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.497531   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:01.497540   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:01.497555   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:01.583083   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:01.583140   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:01.624887   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:01.624919   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:01.676124   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:01.676155   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:01.690861   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:01.690894   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:01.763695   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:04.264867   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:04.279106   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:04.279176   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:04.315358   72122 cri.go:89] found id: ""
	I0910 19:03:04.315390   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.315398   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:04.315403   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:04.315457   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:04.359466   72122 cri.go:89] found id: ""
	I0910 19:03:04.359489   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.359496   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:04.359504   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:04.359563   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:04.399504   72122 cri.go:89] found id: ""
	I0910 19:03:04.399529   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.399538   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:04.399545   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:04.399604   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:04.438244   72122 cri.go:89] found id: ""
	I0910 19:03:04.438269   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.438277   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:04.438282   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:04.438340   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:04.475299   72122 cri.go:89] found id: ""
	I0910 19:03:04.475321   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.475329   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:04.475334   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:04.475386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:04.516500   72122 cri.go:89] found id: ""
	I0910 19:03:04.516520   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.516529   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:04.516534   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:04.516588   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:04.551191   72122 cri.go:89] found id: ""
	I0910 19:03:04.551214   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.551222   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:04.551228   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:04.551273   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:04.585646   72122 cri.go:89] found id: ""
	I0910 19:03:04.585667   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.585675   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:04.585684   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:04.585699   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:04.598832   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:04.598858   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:04.670117   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:04.670140   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:04.670156   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:04.746592   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:04.746626   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:04.784061   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:04.784088   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
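With no component containers to inspect, each pass falls back to host-side diagnostics: the last 400 lines of the kubelet and crio journald units, warning-level-and-above kernel messages, and a container listing that prefers crictl and falls back to docker. The same collection can be run by hand as sketched below; the commands match the log, only the output file names are illustrative:

    # Gather the diagnostics minikube collects on every pass.
    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo journalctl -u crio -n 400 > crio.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a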
	I0910 19:03:07.337082   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:07.350696   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:07.350752   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:07.387344   72122 cri.go:89] found id: ""
	I0910 19:03:07.387373   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.387384   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:07.387391   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:07.387449   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:07.420468   72122 cri.go:89] found id: ""
	I0910 19:03:07.420490   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.420498   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:07.420503   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:07.420566   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:07.453746   72122 cri.go:89] found id: ""
	I0910 19:03:07.453773   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.453784   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:07.453791   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:07.453845   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:07.487359   72122 cri.go:89] found id: ""
	I0910 19:03:07.487388   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.487400   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:07.487407   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:07.487470   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:07.520803   72122 cri.go:89] found id: ""
	I0910 19:03:07.520827   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.520834   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:07.520839   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:07.520898   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:07.556908   72122 cri.go:89] found id: ""
	I0910 19:03:07.556934   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.556945   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:07.556953   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:07.557017   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:07.596072   72122 cri.go:89] found id: ""
	I0910 19:03:07.596093   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.596102   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:07.596107   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:07.596165   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:07.631591   72122 cri.go:89] found id: ""
	I0910 19:03:07.631620   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.631630   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:07.631639   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:07.631661   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:07.683892   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:07.683923   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:07.697619   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:07.697645   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:07.766370   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:07.766397   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:07.766413   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:07.854102   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:07.854140   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:10.400185   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:10.412771   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:10.412842   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:10.447710   72122 cri.go:89] found id: ""
	I0910 19:03:10.447739   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.447750   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:10.447757   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:10.447822   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:10.480865   72122 cri.go:89] found id: ""
	I0910 19:03:10.480892   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.480902   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:10.480909   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:10.480966   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:10.514893   72122 cri.go:89] found id: ""
	I0910 19:03:10.514919   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.514927   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:10.514933   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:10.514994   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:10.556332   72122 cri.go:89] found id: ""
	I0910 19:03:10.556374   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.556385   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:10.556392   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:10.556457   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:10.590529   72122 cri.go:89] found id: ""
	I0910 19:03:10.590562   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.590573   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:10.590581   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:10.590642   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:10.623697   72122 cri.go:89] found id: ""
	I0910 19:03:10.623724   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.623732   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:10.623737   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:10.623788   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:10.659236   72122 cri.go:89] found id: ""
	I0910 19:03:10.659259   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.659270   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:10.659277   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:10.659338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:10.693150   72122 cri.go:89] found id: ""
	I0910 19:03:10.693182   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.693192   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:10.693202   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:10.693217   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:10.744624   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:10.744663   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:10.758797   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:10.758822   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:10.853796   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:10.853815   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:10.853827   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:10.937972   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:10.938019   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:13.481898   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:13.495440   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:13.495505   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:13.531423   72122 cri.go:89] found id: ""
	I0910 19:03:13.531452   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.531463   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:13.531470   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:13.531532   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:13.571584   72122 cri.go:89] found id: ""
	I0910 19:03:13.571607   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.571615   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:13.571620   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:13.571674   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:13.609670   72122 cri.go:89] found id: ""
	I0910 19:03:13.609695   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.609702   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:13.609707   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:13.609761   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:13.644726   72122 cri.go:89] found id: ""
	I0910 19:03:13.644755   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.644766   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:13.644773   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:13.644831   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:13.679692   72122 cri.go:89] found id: ""
	I0910 19:03:13.679722   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.679733   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:13.679741   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:13.679791   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:13.717148   72122 cri.go:89] found id: ""
	I0910 19:03:13.717177   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.717186   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:13.717192   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:13.717247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:13.755650   72122 cri.go:89] found id: ""
	I0910 19:03:13.755676   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.755688   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:13.755693   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:13.755740   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:13.788129   72122 cri.go:89] found id: ""
	I0910 19:03:13.788158   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.788169   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:13.788179   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:13.788194   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:13.865241   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:13.865277   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:13.909205   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:13.909233   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:13.963495   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:13.963523   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:13.977311   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:13.977337   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:14.047015   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:16.547505   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:16.568333   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:16.568412   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:16.610705   72122 cri.go:89] found id: ""
	I0910 19:03:16.610734   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.610744   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:16.610752   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:16.610808   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:16.647307   72122 cri.go:89] found id: ""
	I0910 19:03:16.647333   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.647340   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:16.647345   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:16.647409   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:16.684513   72122 cri.go:89] found id: ""
	I0910 19:03:16.684536   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.684544   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:16.684549   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:16.684602   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:16.718691   72122 cri.go:89] found id: ""
	I0910 19:03:16.718719   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.718729   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:16.718734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:16.718794   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:16.753250   72122 cri.go:89] found id: ""
	I0910 19:03:16.753279   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.753291   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:16.753298   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:16.753358   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:16.788953   72122 cri.go:89] found id: ""
	I0910 19:03:16.788984   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.789001   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:16.789009   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:16.789084   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:16.823715   72122 cri.go:89] found id: ""
	I0910 19:03:16.823746   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.823760   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:16.823767   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:16.823837   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:16.858734   72122 cri.go:89] found id: ""
	I0910 19:03:16.858758   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.858770   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:16.858780   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:16.858795   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:16.897983   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:16.898012   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:16.950981   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:16.951015   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:16.964809   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:16.964839   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:17.039142   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:17.039163   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:17.039177   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:19.619941   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:19.634432   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:19.634489   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:19.671220   72122 cri.go:89] found id: ""
	I0910 19:03:19.671246   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.671256   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:19.671264   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:19.671322   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:19.704251   72122 cri.go:89] found id: ""
	I0910 19:03:19.704278   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.704294   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:19.704301   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:19.704347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:19.745366   72122 cri.go:89] found id: ""
	I0910 19:03:19.745393   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.745403   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:19.745410   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:19.745466   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:19.781100   72122 cri.go:89] found id: ""
	I0910 19:03:19.781129   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.781136   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:19.781141   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:19.781195   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:19.817177   72122 cri.go:89] found id: ""
	I0910 19:03:19.817207   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.817219   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:19.817226   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:19.817292   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:19.852798   72122 cri.go:89] found id: ""
	I0910 19:03:19.852829   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.852837   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:19.852842   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:19.852889   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:19.887173   72122 cri.go:89] found id: ""
	I0910 19:03:19.887200   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.887210   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:19.887219   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:19.887409   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:19.922997   72122 cri.go:89] found id: ""
	I0910 19:03:19.923026   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.923038   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:19.923049   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:19.923063   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:19.975703   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:19.975736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:19.989834   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:19.989866   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:20.061312   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:20.061332   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:20.061344   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:20.143045   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:20.143080   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:22.681900   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:22.694860   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:22.694923   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:22.738529   72122 cri.go:89] found id: ""
	I0910 19:03:22.738553   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.738563   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:22.738570   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:22.738640   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:22.778102   72122 cri.go:89] found id: ""
	I0910 19:03:22.778132   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.778143   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:22.778150   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:22.778207   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:22.813273   72122 cri.go:89] found id: ""
	I0910 19:03:22.813307   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.813320   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:22.813334   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:22.813397   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:22.849613   72122 cri.go:89] found id: ""
	I0910 19:03:22.849637   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.849646   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:22.849651   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:22.849701   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:22.883138   72122 cri.go:89] found id: ""
	I0910 19:03:22.883167   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.883178   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:22.883185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:22.883237   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:22.918521   72122 cri.go:89] found id: ""
	I0910 19:03:22.918550   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.918567   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:22.918574   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:22.918632   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:22.966657   72122 cri.go:89] found id: ""
	I0910 19:03:22.966684   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.966691   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:22.966701   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:22.966762   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:23.022254   72122 cri.go:89] found id: ""
	I0910 19:03:23.022282   72122 logs.go:276] 0 containers: []
	W0910 19:03:23.022290   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:23.022298   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:23.022309   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:23.082347   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:23.082386   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:23.096792   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:23.096814   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:23.172720   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:23.172740   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:23.172754   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:23.256155   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:23.256193   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:25.797211   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:25.810175   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:25.810234   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:25.844848   72122 cri.go:89] found id: ""
	I0910 19:03:25.844876   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.844886   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:25.844901   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:25.844968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:25.877705   72122 cri.go:89] found id: ""
	I0910 19:03:25.877736   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.877747   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:25.877755   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:25.877807   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:25.913210   72122 cri.go:89] found id: ""
	I0910 19:03:25.913238   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.913248   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:25.913256   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:25.913316   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:25.947949   72122 cri.go:89] found id: ""
	I0910 19:03:25.947974   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.947984   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:25.947991   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:25.948050   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:25.983487   72122 cri.go:89] found id: ""
	I0910 19:03:25.983511   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.983519   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:25.983524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:25.983573   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:26.018176   72122 cri.go:89] found id: ""
	I0910 19:03:26.018201   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.018209   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:26.018214   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:26.018271   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:26.052063   72122 cri.go:89] found id: ""
	I0910 19:03:26.052087   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.052097   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:26.052104   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:26.052165   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:26.091919   72122 cri.go:89] found id: ""
	I0910 19:03:26.091949   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.091958   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:26.091968   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:26.091983   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:26.146059   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:26.146094   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:26.160529   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:26.160562   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:26.230742   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:26.230764   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:26.230778   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:26.313191   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:26.313222   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:28.858457   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:28.873725   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:28.873788   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:28.922685   72122 cri.go:89] found id: ""
	I0910 19:03:28.922717   72122 logs.go:276] 0 containers: []
	W0910 19:03:28.922729   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:28.922737   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:28.922795   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:28.973236   72122 cri.go:89] found id: ""
	I0910 19:03:28.973260   72122 logs.go:276] 0 containers: []
	W0910 19:03:28.973270   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:28.973277   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:28.973339   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:29.008999   72122 cri.go:89] found id: ""
	I0910 19:03:29.009049   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.009062   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:29.009081   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:29.009148   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:29.049009   72122 cri.go:89] found id: ""
	I0910 19:03:29.049037   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.049047   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:29.049056   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:29.049131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:29.089543   72122 cri.go:89] found id: ""
	I0910 19:03:29.089564   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.089573   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:29.089578   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:29.089648   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:29.126887   72122 cri.go:89] found id: ""
	I0910 19:03:29.126911   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.126918   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:29.126924   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:29.126969   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:29.161369   72122 cri.go:89] found id: ""
	I0910 19:03:29.161395   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.161405   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:29.161412   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:29.161474   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:29.199627   72122 cri.go:89] found id: ""
	I0910 19:03:29.199652   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.199661   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:29.199672   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:29.199691   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:29.268353   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:29.268386   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:29.268401   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:29.351470   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:29.351504   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:29.391768   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:29.391796   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:29.442705   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:29.442736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:31.957567   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:31.970218   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:31.970274   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:32.004870   72122 cri.go:89] found id: ""
	I0910 19:03:32.004898   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.004908   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:32.004915   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:32.004971   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:32.045291   72122 cri.go:89] found id: ""
	I0910 19:03:32.045322   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.045331   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:32.045337   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:32.045403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:32.085969   72122 cri.go:89] found id: ""
	I0910 19:03:32.085999   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.086007   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:32.086013   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:32.086067   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:32.120100   72122 cri.go:89] found id: ""
	I0910 19:03:32.120127   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.120135   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:32.120141   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:32.120187   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:32.153977   72122 cri.go:89] found id: ""
	I0910 19:03:32.154004   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.154011   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:32.154016   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:32.154065   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:32.195980   72122 cri.go:89] found id: ""
	I0910 19:03:32.196005   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.196013   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:32.196019   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:32.196068   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:32.233594   72122 cri.go:89] found id: ""
	I0910 19:03:32.233616   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.233623   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:32.233632   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:32.233677   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:32.268118   72122 cri.go:89] found id: ""
	I0910 19:03:32.268144   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.268152   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:32.268160   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:32.268171   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:32.281389   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:32.281416   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:32.359267   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:32.359289   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:32.359304   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:32.445096   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:32.445137   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:32.483288   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:32.483325   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:35.040393   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:35.053698   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:35.053750   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:35.087712   72122 cri.go:89] found id: ""
	I0910 19:03:35.087742   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.087751   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:35.087757   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:35.087802   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:35.125437   72122 cri.go:89] found id: ""
	I0910 19:03:35.125468   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.125482   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:35.125495   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:35.125562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:35.163885   72122 cri.go:89] found id: ""
	I0910 19:03:35.163914   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.163924   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:35.163931   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:35.163989   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:35.199426   72122 cri.go:89] found id: ""
	I0910 19:03:35.199459   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.199471   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:35.199479   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:35.199559   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:35.236388   72122 cri.go:89] found id: ""
	I0910 19:03:35.236408   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.236416   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:35.236421   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:35.236465   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:35.274797   72122 cri.go:89] found id: ""
	I0910 19:03:35.274817   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.274825   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:35.274830   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:35.274874   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:35.308127   72122 cri.go:89] found id: ""
	I0910 19:03:35.308155   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.308166   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:35.308173   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:35.308230   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:35.340675   72122 cri.go:89] found id: ""
	I0910 19:03:35.340697   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.340704   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:35.340712   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:35.340727   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:35.390806   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:35.390842   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:35.404427   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:35.404458   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:35.471526   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:35.471560   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:35.471575   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:35.547469   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:35.547497   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:38.087127   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:38.100195   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:38.100251   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:38.135386   72122 cri.go:89] found id: ""
	I0910 19:03:38.135408   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.135416   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:38.135422   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:38.135480   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:38.168531   72122 cri.go:89] found id: ""
	I0910 19:03:38.168558   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.168568   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:38.168577   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:38.168639   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:38.202931   72122 cri.go:89] found id: ""
	I0910 19:03:38.202958   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.202968   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:38.202974   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:38.203030   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:38.239185   72122 cri.go:89] found id: ""
	I0910 19:03:38.239209   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.239219   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:38.239226   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:38.239279   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:38.276927   72122 cri.go:89] found id: ""
	I0910 19:03:38.276952   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.276961   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:38.276967   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:38.277035   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:38.311923   72122 cri.go:89] found id: ""
	I0910 19:03:38.311951   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.311962   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:38.311971   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:38.312034   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:38.344981   72122 cri.go:89] found id: ""
	I0910 19:03:38.345012   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.345023   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:38.345030   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:38.345099   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:38.378012   72122 cri.go:89] found id: ""
	I0910 19:03:38.378037   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.378048   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:38.378058   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:38.378076   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:38.449361   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:38.449384   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:38.449396   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:38.530683   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:38.530713   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:38.570047   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:38.570073   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:38.620143   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:38.620176   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:41.134152   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:41.148416   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:41.148509   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:41.186681   72122 cri.go:89] found id: ""
	I0910 19:03:41.186706   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.186713   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:41.186719   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:41.186767   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:41.221733   72122 cri.go:89] found id: ""
	I0910 19:03:41.221758   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.221769   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:41.221776   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:41.221834   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:41.256099   72122 cri.go:89] found id: ""
	I0910 19:03:41.256125   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.256136   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:41.256143   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:41.256194   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:41.289825   72122 cri.go:89] found id: ""
	I0910 19:03:41.289850   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.289860   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:41.289867   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:41.289926   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:41.323551   72122 cri.go:89] found id: ""
	I0910 19:03:41.323581   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.323594   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:41.323601   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:41.323659   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:41.356508   72122 cri.go:89] found id: ""
	I0910 19:03:41.356535   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.356546   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:41.356553   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:41.356608   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:41.391556   72122 cri.go:89] found id: ""
	I0910 19:03:41.391579   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.391586   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:41.391592   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:41.391651   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:41.427685   72122 cri.go:89] found id: ""
	I0910 19:03:41.427711   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.427726   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:41.427743   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:41.427758   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:41.481970   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:41.482001   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:41.495266   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:41.495290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:41.568334   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:41.568357   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:41.568370   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:41.650178   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:41.650211   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:44.193665   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:44.209118   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:44.209197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:44.245792   72122 cri.go:89] found id: ""
	I0910 19:03:44.245819   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.245829   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:44.245834   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:44.245900   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:44.285673   72122 cri.go:89] found id: ""
	I0910 19:03:44.285699   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.285711   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:44.285719   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:44.285787   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:44.326471   72122 cri.go:89] found id: ""
	I0910 19:03:44.326495   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.326505   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:44.326520   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:44.326589   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:44.367864   72122 cri.go:89] found id: ""
	I0910 19:03:44.367890   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.367898   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:44.367907   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:44.367954   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:44.407161   72122 cri.go:89] found id: ""
	I0910 19:03:44.407185   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.407193   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:44.407198   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:44.407256   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:44.446603   72122 cri.go:89] found id: ""
	I0910 19:03:44.446628   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.446638   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:44.446645   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:44.446705   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:44.486502   72122 cri.go:89] found id: ""
	I0910 19:03:44.486526   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.486536   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:44.486543   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:44.486605   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:44.524992   72122 cri.go:89] found id: ""
	I0910 19:03:44.525017   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.525025   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:44.525033   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:44.525044   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:44.579387   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:44.579418   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:44.594045   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:44.594070   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:44.678857   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:44.678883   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:44.678897   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:44.763799   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:44.763830   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:47.305631   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:47.319275   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:47.319347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:47.359199   72122 cri.go:89] found id: ""
	I0910 19:03:47.359222   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.359233   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:47.359240   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:47.359300   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:47.397579   72122 cri.go:89] found id: ""
	I0910 19:03:47.397602   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.397610   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:47.397616   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:47.397674   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:47.431114   72122 cri.go:89] found id: ""
	I0910 19:03:47.431138   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.431146   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:47.431151   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:47.431205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:47.470475   72122 cri.go:89] found id: ""
	I0910 19:03:47.470499   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.470509   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:47.470515   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:47.470573   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:47.504484   72122 cri.go:89] found id: ""
	I0910 19:03:47.504509   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.504518   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:47.504524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:47.504577   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:47.541633   72122 cri.go:89] found id: ""
	I0910 19:03:47.541651   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.541658   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:47.541663   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:47.541706   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:47.579025   72122 cri.go:89] found id: ""
	I0910 19:03:47.579051   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.579060   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:47.579068   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:47.579123   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:47.612333   72122 cri.go:89] found id: ""
	I0910 19:03:47.612359   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.612370   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:47.612380   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:47.612395   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:47.667214   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:47.667242   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:47.683425   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:47.683466   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:47.749510   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:47.749531   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:47.749543   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:47.830454   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:47.830487   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:50.373207   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:50.387191   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:50.387247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:50.422445   72122 cri.go:89] found id: ""
	I0910 19:03:50.422476   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.422488   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:50.422495   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:50.422562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:50.456123   72122 cri.go:89] found id: ""
	I0910 19:03:50.456145   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.456153   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:50.456157   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:50.456211   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:50.488632   72122 cri.go:89] found id: ""
	I0910 19:03:50.488661   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.488672   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:50.488680   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:50.488736   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:50.523603   72122 cri.go:89] found id: ""
	I0910 19:03:50.523628   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.523636   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:50.523641   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:50.523699   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:50.559741   72122 cri.go:89] found id: ""
	I0910 19:03:50.559765   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.559773   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:50.559778   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:50.559842   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:50.595387   72122 cri.go:89] found id: ""
	I0910 19:03:50.595406   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.595414   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:50.595420   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:50.595472   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:50.628720   72122 cri.go:89] found id: ""
	I0910 19:03:50.628747   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.628767   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:50.628774   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:50.628833   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:50.660635   72122 cri.go:89] found id: ""
	I0910 19:03:50.660655   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.660663   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:50.660671   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:50.660683   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:50.716517   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:50.716544   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:50.731411   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:50.731443   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:50.799252   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:50.799275   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:50.799290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:50.874490   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:50.874524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:53.417835   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:53.430627   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:53.430694   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:53.469953   72122 cri.go:89] found id: ""
	I0910 19:03:53.469981   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.469992   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:53.469999   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:53.470060   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:53.503712   72122 cri.go:89] found id: ""
	I0910 19:03:53.503739   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.503750   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:53.503757   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:53.503814   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:53.539875   72122 cri.go:89] found id: ""
	I0910 19:03:53.539895   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.539902   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:53.539907   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:53.539952   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:53.575040   72122 cri.go:89] found id: ""
	I0910 19:03:53.575067   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.575078   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:53.575085   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:53.575159   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:53.611171   72122 cri.go:89] found id: ""
	I0910 19:03:53.611193   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.611201   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:53.611206   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:53.611253   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:53.644467   72122 cri.go:89] found id: ""
	I0910 19:03:53.644494   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.644505   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:53.644513   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:53.644575   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:53.680886   72122 cri.go:89] found id: ""
	I0910 19:03:53.680913   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.680924   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:53.680931   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:53.680989   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:53.716834   72122 cri.go:89] found id: ""
	I0910 19:03:53.716863   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.716875   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:53.716885   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:53.716900   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:53.755544   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:53.755568   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:53.807382   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:53.807411   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:53.820289   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:53.820311   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:53.891500   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:53.891524   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:53.891540   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:56.472368   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:56.491939   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:56.492020   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:56.535575   72122 cri.go:89] found id: ""
	I0910 19:03:56.535603   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.535614   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:56.535620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:56.535672   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:56.570366   72122 cri.go:89] found id: ""
	I0910 19:03:56.570390   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.570398   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:56.570403   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:56.570452   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:56.609486   72122 cri.go:89] found id: ""
	I0910 19:03:56.609524   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.609535   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:56.609542   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:56.609608   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:56.650268   72122 cri.go:89] found id: ""
	I0910 19:03:56.650295   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.650305   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:56.650312   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:56.650371   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:56.689113   72122 cri.go:89] found id: ""
	I0910 19:03:56.689139   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.689146   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:56.689154   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:56.689214   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:56.721546   72122 cri.go:89] found id: ""
	I0910 19:03:56.721568   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.721576   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:56.721582   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:56.721639   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:56.753149   72122 cri.go:89] found id: ""
	I0910 19:03:56.753171   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.753179   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:56.753185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:56.753233   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:56.786624   72122 cri.go:89] found id: ""
	I0910 19:03:56.786648   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.786658   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:56.786669   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:56.786683   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:56.840243   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:56.840276   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:56.854453   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:56.854475   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:56.928814   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:56.928835   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:56.928849   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:57.012360   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:57.012403   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:59.558561   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:59.572993   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:59.573094   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:59.618957   72122 cri.go:89] found id: ""
	I0910 19:03:59.618988   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.618999   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:59.619008   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:59.619072   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:59.662544   72122 cri.go:89] found id: ""
	I0910 19:03:59.662643   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.662661   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:59.662673   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:59.662750   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:59.704323   72122 cri.go:89] found id: ""
	I0910 19:03:59.704349   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.704360   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:59.704367   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:59.704426   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:59.738275   72122 cri.go:89] found id: ""
	I0910 19:03:59.738301   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.738311   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:59.738317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:59.738367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:59.778887   72122 cri.go:89] found id: ""
	I0910 19:03:59.778922   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.778934   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:59.778944   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:59.779010   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:59.814953   72122 cri.go:89] found id: ""
	I0910 19:03:59.814985   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.814995   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:59.815003   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:59.815064   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:59.850016   72122 cri.go:89] found id: ""
	I0910 19:03:59.850048   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.850061   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:59.850069   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:59.850131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:59.887546   72122 cri.go:89] found id: ""
	I0910 19:03:59.887589   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.887600   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:59.887613   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:59.887632   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:59.938761   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:59.938784   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:59.954572   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:59.954603   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:04:00.029593   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:04:00.029622   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:00.029638   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:00.121427   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:04:00.121462   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:02.660924   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:02.674661   72122 kubeadm.go:597] duration metric: took 4m3.166175956s to restartPrimaryControlPlane
	W0910 19:04:02.674744   72122 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0910 19:04:02.674769   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:04:03.133507   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:03.150426   72122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:04:03.161678   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:04:03.173362   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:04:03.173389   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 19:04:03.173436   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:04:03.183872   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:04:03.183934   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:04:03.193891   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:04:03.203385   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:04:03.203450   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:04:03.216255   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:04:03.227938   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:04:03.228001   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:04:03.240799   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:04:03.252871   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:04:03.252922   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:04:03.263682   72122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:04:03.337478   72122 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0910 19:04:03.337564   72122 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:04:03.506276   72122 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:04:03.506454   72122 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:04:03.506587   72122 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 19:04:03.697062   72122 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:04:03.698908   72122 out.go:235]   - Generating certificates and keys ...
	I0910 19:04:03.699004   72122 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:04:03.699083   72122 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:04:03.699184   72122 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:04:03.699270   72122 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:04:03.699361   72122 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:04:03.699517   72122 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:04:03.700040   72122 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:04:03.700773   72122 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:04:03.701529   72122 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:04:03.702334   72122 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:04:03.702627   72122 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:04:03.702715   72122 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:04:03.929760   72122 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:04:03.992724   72122 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:04:04.087552   72122 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:04:04.226550   72122 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:04:04.244695   72122 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:04:04.246125   72122 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:04:04.246187   72122 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:04:04.396099   72122 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:04:04.397627   72122 out.go:235]   - Booting up control plane ...
	I0910 19:04:04.397763   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:04:04.405199   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:04:04.407281   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:04:04.408182   72122 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:04:04.411438   72122 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 19:04:44.413134   72122 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0910 19:04:44.413215   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:44.413400   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:04:49.413796   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:49.413967   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:04:59.414341   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:59.414514   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:19.415680   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:05:19.415950   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:59.417770   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:05:59.418015   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:59.418035   72122 kubeadm.go:310] 
	I0910 19:05:59.418101   72122 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0910 19:05:59.418137   72122 kubeadm.go:310] 		timed out waiting for the condition
	I0910 19:05:59.418143   72122 kubeadm.go:310] 
	I0910 19:05:59.418178   72122 kubeadm.go:310] 	This error is likely caused by:
	I0910 19:05:59.418207   72122 kubeadm.go:310] 		- The kubelet is not running
	I0910 19:05:59.418313   72122 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0910 19:05:59.418321   72122 kubeadm.go:310] 
	I0910 19:05:59.418443   72122 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0910 19:05:59.418477   72122 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0910 19:05:59.418519   72122 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0910 19:05:59.418527   72122 kubeadm.go:310] 
	I0910 19:05:59.418625   72122 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0910 19:05:59.418723   72122 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0910 19:05:59.418731   72122 kubeadm.go:310] 
	I0910 19:05:59.418869   72122 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0910 19:05:59.418976   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0910 19:05:59.419045   72122 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0910 19:05:59.419141   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0910 19:05:59.419152   72122 kubeadm.go:310] 
	I0910 19:05:59.420015   72122 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:05:59.420093   72122 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0910 19:05:59.420165   72122 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0910 19:05:59.420289   72122 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0910 19:05:59.420339   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:06:04.848652   72122 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.428289133s)
	I0910 19:06:04.848719   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:06:04.862914   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:06:04.872903   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:06:04.872920   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 19:06:04.872960   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:06:04.882109   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:06:04.882168   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:06:04.890962   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:06:04.899925   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:06:04.899985   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:06:04.908796   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:06:04.917123   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:06:04.917173   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:06:04.925821   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:06:04.937885   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:06:04.937963   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:06:04.948108   72122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:06:05.019246   72122 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0910 19:06:05.019321   72122 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:06:05.162639   72122 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:06:05.162770   72122 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:06:05.162918   72122 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 19:06:05.343270   72122 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:06:05.345092   72122 out.go:235]   - Generating certificates and keys ...
	I0910 19:06:05.345189   72122 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:06:05.345299   72122 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:06:05.345417   72122 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:06:05.345497   72122 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:06:05.345606   72122 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:06:05.345718   72122 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:06:05.345981   72122 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:06:05.346367   72122 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:06:05.346822   72122 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:06:05.347133   72122 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:06:05.347246   72122 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:06:05.347346   72122 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:06:05.536681   72122 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:06:05.773929   72122 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:06:05.994857   72122 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:06:06.139145   72122 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:06:06.154510   72122 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:06:06.155479   72122 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:06:06.155548   72122 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:06:06.311520   72122 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:06:06.314167   72122 out.go:235]   - Booting up control plane ...
	I0910 19:06:06.314311   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:06:06.320856   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:06:06.321801   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:06:06.322508   72122 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:06:06.324744   72122 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 19:06:46.327168   72122 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0910 19:06:46.327286   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:06:46.327534   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:06:51.328423   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:06:51.328643   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:07:01.329028   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:07:01.329315   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:07:21.329371   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:07:21.329627   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:08:01.328238   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:08:01.328535   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:08:01.328566   72122 kubeadm.go:310] 
	I0910 19:08:01.328625   72122 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0910 19:08:01.328688   72122 kubeadm.go:310] 		timed out waiting for the condition
	I0910 19:08:01.328701   72122 kubeadm.go:310] 
	I0910 19:08:01.328749   72122 kubeadm.go:310] 	This error is likely caused by:
	I0910 19:08:01.328797   72122 kubeadm.go:310] 		- The kubelet is not running
	I0910 19:08:01.328941   72122 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0910 19:08:01.328953   72122 kubeadm.go:310] 
	I0910 19:08:01.329068   72122 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0910 19:08:01.329136   72122 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0910 19:08:01.329177   72122 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0910 19:08:01.329191   72122 kubeadm.go:310] 
	I0910 19:08:01.329310   72122 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0910 19:08:01.329377   72122 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0910 19:08:01.329383   72122 kubeadm.go:310] 
	I0910 19:08:01.329468   72122 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0910 19:08:01.329539   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0910 19:08:01.329607   72122 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0910 19:08:01.329667   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0910 19:08:01.329674   72122 kubeadm.go:310] 
	I0910 19:08:01.330783   72122 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:08:01.330892   72122 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0910 19:08:01.330963   72122 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0910 19:08:01.331020   72122 kubeadm.go:394] duration metric: took 8m1.874926868s to StartCluster
	I0910 19:08:01.331061   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:08:01.331117   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:08:01.385468   72122 cri.go:89] found id: ""
	I0910 19:08:01.385492   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.385499   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:08:01.385505   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:08:01.385571   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:08:01.424028   72122 cri.go:89] found id: ""
	I0910 19:08:01.424051   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.424060   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:08:01.424064   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:08:01.424121   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:08:01.462946   72122 cri.go:89] found id: ""
	I0910 19:08:01.462973   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.462983   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:08:01.462991   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:08:01.463045   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:08:01.498242   72122 cri.go:89] found id: ""
	I0910 19:08:01.498269   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.498278   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:08:01.498283   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:08:01.498329   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:08:01.532917   72122 cri.go:89] found id: ""
	I0910 19:08:01.532946   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.532953   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:08:01.532959   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:08:01.533011   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:08:01.567935   72122 cri.go:89] found id: ""
	I0910 19:08:01.567959   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.567967   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:08:01.567973   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:08:01.568027   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:08:01.601393   72122 cri.go:89] found id: ""
	I0910 19:08:01.601418   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.601426   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:08:01.601432   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:08:01.601489   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:08:01.639307   72122 cri.go:89] found id: ""
	I0910 19:08:01.639335   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.639345   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:08:01.639358   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:08:01.639373   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:08:01.726566   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:08:01.726591   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:08:01.726614   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:08:01.839965   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:08:01.840004   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:08:01.879658   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:08:01.879687   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:08:01.939066   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:08:01.939102   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0910 19:08:01.955390   72122 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0910 19:08:01.955436   72122 out.go:270] * 
	W0910 19:08:01.955500   72122 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0910 19:08:01.955524   72122 out.go:270] * 
	W0910 19:08:01.956343   72122 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 19:08:01.959608   72122 out.go:201] 
	W0910 19:08:01.960877   72122 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0910 19:08:01.960929   72122 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0910 19:08:01.960957   72122 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0910 19:08:01.962345   72122 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-432422 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
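The kubeadm output captured above fails at the wait-control-plane phase: the kubelet never answers its health check on localhost:10248, so the control-plane static pods are never confirmed. Below is a minimal follow-up sketch using the checks kubeadm itself suggests, run against the profile VM with the same test binary; driving these commands interactively through `minikube ssh` is an illustrative assumption, not part of the automated test flow:

	# Kubelet state and recent journal entries on the node
	out/minikube-linux-amd64 ssh -p old-k8s-version-432422 -- sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p old-k8s-version-432422 -- "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
	# Control-plane containers known to CRI-O (the same command the kubeadm hint prints)
	out/minikube-linux-amd64 ssh -p old-k8s-version-432422 -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# CONTAINERID is a placeholder for an ID returned by the previous command
	out/minikube-linux-amd64 ssh -p old-k8s-version-432422 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

If the journal points at a cgroup-driver mismatch, the --extra-config=kubelet.cgroup-driver=systemd flag suggested in the log above can be appended to the failing start invocation.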
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-432422 -n old-k8s-version-432422
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-432422 -n old-k8s-version-432422: exit status 2 (242.994895ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
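The --format={{.Host}} query above only reports the VM state, which is why it prints Running even though the cluster never came up. A sketch of checking the remaining components with the same binary follows; the output shape shown in the comments is illustrative (minikube's default status fields), not captured by this run:

	out/minikube-linux-amd64 status -p old-k8s-version-432422
	# Illustrative output for this failure mode:
	# old-k8s-version-432422
	# type: Control Plane
	# host: Running
	# kubelet: Stopped
	# apiserver: Stopped
	# kubeconfig: Configured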
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-432422 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-432422 logs -n 25: (1.6469137s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-642043 sudo cat                              | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo find                             | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo crio                             | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-642043                                       | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-186737 | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | disable-driver-mounts-186737                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:52 UTC |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-836868            | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-836868                                  | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-347802             | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-557504  | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC | 10 Sep 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC |                     |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-836868                 | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-836868                                  | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-432422        | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-347802                  | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-557504       | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-432422             | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:56:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:56:02.487676   72122 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:56:02.487789   72122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:56:02.487799   72122 out.go:358] Setting ErrFile to fd 2...
	I0910 18:56:02.487804   72122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:56:02.487953   72122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:56:02.488491   72122 out.go:352] Setting JSON to false
	I0910 18:56:02.489572   72122 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5914,"bootTime":1725988648,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 18:56:02.489637   72122 start.go:139] virtualization: kvm guest
	I0910 18:56:02.491991   72122 out.go:177] * [old-k8s-version-432422] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 18:56:02.493117   72122 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:56:02.493113   72122 notify.go:220] Checking for updates...
	I0910 18:56:02.494213   72122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:56:02.495356   72122 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:56:02.496370   72122 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:56:02.497440   72122 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 18:56:02.498703   72122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:56:02.500450   72122 config.go:182] Loaded profile config "old-k8s-version-432422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0910 18:56:02.501100   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:56:02.501150   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:56:02.515836   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0910 18:56:02.516286   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:56:02.516787   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:56:02.516815   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:56:02.517116   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:56:02.517300   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:56:02.519092   72122 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0910 18:56:02.520121   72122 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:56:02.520405   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:56:02.520436   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:56:02.534860   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34601
	I0910 18:56:02.535243   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:56:02.535688   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:56:02.535711   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:56:02.536004   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:56:02.536215   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:56:02.570682   72122 out.go:177] * Using the kvm2 driver based on existing profile
	I0910 18:56:02.571710   72122 start.go:297] selected driver: kvm2
	I0910 18:56:02.571722   72122 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:56:02.571821   72122 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:56:02.572465   72122 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:56:02.572528   72122 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 18:56:02.587001   72122 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 18:56:02.587381   72122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:56:02.587417   72122 cni.go:84] Creating CNI manager for ""
	I0910 18:56:02.587427   72122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:56:02.587471   72122 start.go:340] cluster config:
	{Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:56:02.587599   72122 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:56:02.589116   72122 out.go:177] * Starting "old-k8s-version-432422" primary control-plane node in "old-k8s-version-432422" cluster
	I0910 18:56:02.590155   72122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 18:56:02.590185   72122 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0910 18:56:02.590194   72122 cache.go:56] Caching tarball of preloaded images
	I0910 18:56:02.590294   72122 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 18:56:02.590313   72122 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0910 18:56:02.590415   72122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/config.json ...
	I0910 18:56:02.590612   72122 start.go:360] acquireMachinesLock for old-k8s-version-432422: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:56:08.097313   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:11.169360   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:17.249255   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:20.321326   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:26.401359   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:29.473351   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:35.553474   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:38.625322   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:44.705324   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:47.777408   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:53.857373   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:56.929356   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:03.009354   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:06.081346   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:12.161342   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:15.233363   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:21.313385   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:24.385281   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:30.465347   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:33.537368   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:39.617395   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:42.689359   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:48.769334   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:51.841388   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:57.921359   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:00.993375   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:07.073343   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:10.145433   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:16.225336   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:19.297345   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:25.377291   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:28.449365   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:34.529306   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:37.601300   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:43.681334   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:46.753328   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:49.757234   71529 start.go:364] duration metric: took 4m17.481092907s to acquireMachinesLock for "no-preload-347802"
	I0910 18:58:49.757299   71529 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:58:49.757316   71529 fix.go:54] fixHost starting: 
	I0910 18:58:49.757667   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:58:49.757694   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:58:49.772681   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41041
	I0910 18:58:49.773067   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:58:49.773498   71529 main.go:141] libmachine: Using API Version  1
	I0910 18:58:49.773518   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:58:49.773963   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:58:49.774127   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:58:49.774279   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 18:58:49.775704   71529 fix.go:112] recreateIfNeeded on no-preload-347802: state=Stopped err=<nil>
	I0910 18:58:49.775726   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	W0910 18:58:49.775886   71529 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:58:49.777669   71529 out.go:177] * Restarting existing kvm2 VM for "no-preload-347802" ...
	I0910 18:58:49.778739   71529 main.go:141] libmachine: (no-preload-347802) Calling .Start
	I0910 18:58:49.778882   71529 main.go:141] libmachine: (no-preload-347802) Ensuring networks are active...
	I0910 18:58:49.779509   71529 main.go:141] libmachine: (no-preload-347802) Ensuring network default is active
	I0910 18:58:49.779824   71529 main.go:141] libmachine: (no-preload-347802) Ensuring network mk-no-preload-347802 is active
	I0910 18:58:49.780121   71529 main.go:141] libmachine: (no-preload-347802) Getting domain xml...
	I0910 18:58:49.780766   71529 main.go:141] libmachine: (no-preload-347802) Creating domain...
	I0910 18:58:50.967405   71529 main.go:141] libmachine: (no-preload-347802) Waiting to get IP...
	I0910 18:58:50.968284   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:50.968647   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:50.968726   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:50.968628   72707 retry.go:31] will retry after 197.094328ms: waiting for machine to come up
	I0910 18:58:51.167237   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:51.167630   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:51.167683   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:51.167603   72707 retry.go:31] will retry after 272.376855ms: waiting for machine to come up
	I0910 18:58:51.441212   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:51.441673   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:51.441698   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:51.441636   72707 retry.go:31] will retry after 458.172114ms: waiting for machine to come up
	I0910 18:58:51.900991   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:51.901464   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:51.901498   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:51.901428   72707 retry.go:31] will retry after 442.42629ms: waiting for machine to come up
	I0910 18:58:49.754913   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:58:49.754977   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 18:58:49.755310   71183 buildroot.go:166] provisioning hostname "embed-certs-836868"
	I0910 18:58:49.755335   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 18:58:49.755513   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 18:58:49.757052   71183 machine.go:96] duration metric: took 4m37.423474417s to provisionDockerMachine
	I0910 18:58:49.757138   71183 fix.go:56] duration metric: took 4m37.44458491s for fixHost
	I0910 18:58:49.757149   71183 start.go:83] releasing machines lock for "embed-certs-836868", held for 4m37.444613055s
	W0910 18:58:49.757173   71183 start.go:714] error starting host: provision: host is not running
	W0910 18:58:49.757263   71183 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0910 18:58:49.757273   71183 start.go:729] Will try again in 5 seconds ...
	I0910 18:58:52.345053   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:52.345519   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:52.345540   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:52.345463   72707 retry.go:31] will retry after 732.353971ms: waiting for machine to come up
	I0910 18:58:53.079229   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:53.079686   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:53.079714   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:53.079638   72707 retry.go:31] will retry after 658.057224ms: waiting for machine to come up
	I0910 18:58:53.739313   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:53.739750   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:53.739811   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:53.739732   72707 retry.go:31] will retry after 910.559952ms: waiting for machine to come up
	I0910 18:58:54.651714   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:54.652075   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:54.652099   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:54.652027   72707 retry.go:31] will retry after 1.410431493s: waiting for machine to come up
	I0910 18:58:56.063996   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:56.064396   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:56.064418   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:56.064360   72707 retry.go:31] will retry after 1.795467467s: waiting for machine to come up
	I0910 18:58:54.759533   71183 start.go:360] acquireMachinesLock for embed-certs-836868: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:58:57.862130   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:57.862484   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:57.862509   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:57.862453   72707 retry.go:31] will retry after 1.450403908s: waiting for machine to come up
	I0910 18:58:59.315197   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:59.315621   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:59.315657   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:59.315566   72707 retry.go:31] will retry after 1.81005281s: waiting for machine to come up
	I0910 18:59:01.128164   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:01.128611   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:59:01.128642   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:59:01.128563   72707 retry.go:31] will retry after 3.333505805s: waiting for machine to come up
	I0910 18:59:04.464526   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:04.465004   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:59:04.465030   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:59:04.464951   72707 retry.go:31] will retry after 3.603817331s: waiting for machine to come up
	I0910 18:59:09.257584   71627 start.go:364] duration metric: took 4m27.770499275s to acquireMachinesLock for "default-k8s-diff-port-557504"
	I0910 18:59:09.257656   71627 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:09.257673   71627 fix.go:54] fixHost starting: 
	I0910 18:59:09.258100   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:09.258144   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:09.276230   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0910 18:59:09.276622   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:09.277129   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:09.277151   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:09.277489   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:09.277663   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:09.277793   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:09.279006   71627 fix.go:112] recreateIfNeeded on default-k8s-diff-port-557504: state=Stopped err=<nil>
	I0910 18:59:09.279043   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	W0910 18:59:09.279178   71627 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:09.281106   71627 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-557504" ...
	I0910 18:59:08.073057   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.073638   71529 main.go:141] libmachine: (no-preload-347802) Found IP for machine: 192.168.50.138
	I0910 18:59:08.073660   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has current primary IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.073666   71529 main.go:141] libmachine: (no-preload-347802) Reserving static IP address...
	I0910 18:59:08.074129   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "no-preload-347802", mac: "52:54:00:5b:b1:44", ip: "192.168.50.138"} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.074153   71529 main.go:141] libmachine: (no-preload-347802) Reserved static IP address: 192.168.50.138
	I0910 18:59:08.074170   71529 main.go:141] libmachine: (no-preload-347802) DBG | skip adding static IP to network mk-no-preload-347802 - found existing host DHCP lease matching {name: "no-preload-347802", mac: "52:54:00:5b:b1:44", ip: "192.168.50.138"}
	I0910 18:59:08.074179   71529 main.go:141] libmachine: (no-preload-347802) Waiting for SSH to be available...
	I0910 18:59:08.074187   71529 main.go:141] libmachine: (no-preload-347802) DBG | Getting to WaitForSSH function...
	I0910 18:59:08.076434   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.076744   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.076767   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.076928   71529 main.go:141] libmachine: (no-preload-347802) DBG | Using SSH client type: external
	I0910 18:59:08.076950   71529 main.go:141] libmachine: (no-preload-347802) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa (-rw-------)
	I0910 18:59:08.076979   71529 main.go:141] libmachine: (no-preload-347802) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:08.076992   71529 main.go:141] libmachine: (no-preload-347802) DBG | About to run SSH command:
	I0910 18:59:08.077029   71529 main.go:141] libmachine: (no-preload-347802) DBG | exit 0
	I0910 18:59:08.201181   71529 main.go:141] libmachine: (no-preload-347802) DBG | SSH cmd err, output: <nil>: 
	I0910 18:59:08.201561   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetConfigRaw
	I0910 18:59:08.202195   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:08.204390   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.204639   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.204676   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.204932   71529 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/config.json ...
	I0910 18:59:08.205227   71529 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:08.205245   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:08.205464   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.207451   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.207833   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.207862   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.207956   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.208120   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.208266   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.208402   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.208584   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.208811   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.208826   71529 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:08.317392   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:08.317421   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetMachineName
	I0910 18:59:08.317693   71529 buildroot.go:166] provisioning hostname "no-preload-347802"
	I0910 18:59:08.317721   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetMachineName
	I0910 18:59:08.317870   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.320440   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.320749   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.320777   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.320922   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.321092   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.321295   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.321433   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.321607   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.321764   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.321778   71529 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-347802 && echo "no-preload-347802" | sudo tee /etc/hostname
	I0910 18:59:08.442907   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-347802
	
	I0910 18:59:08.442932   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.445449   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.445743   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.445769   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.445930   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.446135   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.446308   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.446461   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.446642   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.446831   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.446853   71529 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-347802' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-347802/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-347802' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:59:08.561710   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:59:08.561738   71529 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:08.561760   71529 buildroot.go:174] setting up certificates
	I0910 18:59:08.561771   71529 provision.go:84] configureAuth start
	I0910 18:59:08.561782   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetMachineName
	I0910 18:59:08.562065   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:08.564917   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.565296   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.565318   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.565468   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.567579   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.567883   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.567909   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.567998   71529 provision.go:143] copyHostCerts
	I0910 18:59:08.568062   71529 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:08.568074   71529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:08.568155   71529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:08.568259   71529 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:08.568269   71529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:08.568297   71529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:08.568362   71529 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:08.568369   71529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:08.568398   71529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:08.568457   71529 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.no-preload-347802 san=[127.0.0.1 192.168.50.138 localhost minikube no-preload-347802]
	I0910 18:59:08.635212   71529 provision.go:177] copyRemoteCerts
	I0910 18:59:08.635296   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:08.635321   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.637851   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.638202   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.638227   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.638392   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.638561   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.638727   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.638850   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:08.723477   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:08.747854   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0910 18:59:08.770184   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:08.792105   71529 provision.go:87] duration metric: took 230.324534ms to configureAuth
	I0910 18:59:08.792125   71529 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:08.792306   71529 config.go:182] Loaded profile config "no-preload-347802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:59:08.792389   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.795139   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.795414   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.795440   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.795580   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.795767   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.795931   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.796075   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.796201   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.796385   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.796404   71529 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:09.021498   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:09.021530   71529 machine.go:96] duration metric: took 816.290576ms to provisionDockerMachine
	I0910 18:59:09.021540   71529 start.go:293] postStartSetup for "no-preload-347802" (driver="kvm2")
	I0910 18:59:09.021566   71529 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:09.021587   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.021923   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:09.021951   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.024598   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.024935   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.024965   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.025210   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.025416   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.025598   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.025747   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:09.107986   71529 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:09.111947   71529 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:09.111967   71529 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:09.112028   71529 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:09.112098   71529 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:09.112184   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:09.121734   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:09.144116   71529 start.go:296] duration metric: took 122.562738ms for postStartSetup
	I0910 18:59:09.144159   71529 fix.go:56] duration metric: took 19.386851685s for fixHost
	I0910 18:59:09.144183   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.146816   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.147237   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.147278   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.147396   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.147583   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.147754   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.147886   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.148060   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:09.148274   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:09.148285   71529 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:09.257433   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994749.232014074
	
	I0910 18:59:09.257456   71529 fix.go:216] guest clock: 1725994749.232014074
	I0910 18:59:09.257463   71529 fix.go:229] Guest: 2024-09-10 18:59:09.232014074 +0000 UTC Remote: 2024-09-10 18:59:09.144164668 +0000 UTC m=+277.006797443 (delta=87.849406ms)
	I0910 18:59:09.257478   71529 fix.go:200] guest clock delta is within tolerance: 87.849406ms
	I0910 18:59:09.257491   71529 start.go:83] releasing machines lock for "no-preload-347802", held for 19.50021281s
	I0910 18:59:09.257522   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.257777   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:09.260357   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.260690   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.260715   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.260895   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.261369   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.261545   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.261631   71529 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:09.261681   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.261749   71529 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:09.261774   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.264296   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.264598   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.264630   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.264650   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.264907   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.264992   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.265020   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.265067   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.265189   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.265266   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.265342   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.265400   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:09.265470   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.265602   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:09.367236   71529 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:09.373255   71529 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:09.513271   71529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:09.519091   71529 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:09.519153   71529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:09.534617   71529 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:09.534639   71529 start.go:495] detecting cgroup driver to use...
	I0910 18:59:09.534698   71529 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:09.551186   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:09.565123   71529 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:09.565193   71529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:09.578892   71529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:09.592571   71529 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:09.700953   71529 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:09.831175   71529 docker.go:233] disabling docker service ...
	I0910 18:59:09.831245   71529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:09.845755   71529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:09.858961   71529 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:10.008707   71529 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:10.144588   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:10.158486   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:10.176399   71529 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:59:10.176456   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.186448   71529 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:10.186511   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.196600   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.206639   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.216913   71529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:10.227030   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.237962   71529 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.255181   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.265618   71529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:10.275659   71529 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:10.275713   71529 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:10.288712   71529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:10.301886   71529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:10.415847   71529 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:59:10.500738   71529 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:10.500829   71529 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:10.506564   71529 start.go:563] Will wait 60s for crictl version
	I0910 18:59:10.506620   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:10.510639   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:10.553929   71529 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:10.554034   71529 ssh_runner.go:195] Run: crio --version
	I0910 18:59:10.582508   71529 ssh_runner.go:195] Run: crio --version
	I0910 18:59:10.622516   71529 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 18:59:09.282182   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Start
	I0910 18:59:09.282345   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Ensuring networks are active...
	I0910 18:59:09.282958   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Ensuring network default is active
	I0910 18:59:09.283450   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Ensuring network mk-default-k8s-diff-port-557504 is active
	I0910 18:59:09.283810   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Getting domain xml...
	I0910 18:59:09.284454   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Creating domain...
	I0910 18:59:10.513168   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting to get IP...
	I0910 18:59:10.514173   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.514591   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.514681   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:10.514587   72843 retry.go:31] will retry after 228.672382ms: waiting for machine to come up
	I0910 18:59:10.745046   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.745450   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.745508   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:10.745440   72843 retry.go:31] will retry after 329.196616ms: waiting for machine to come up
	I0910 18:59:11.075777   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.076237   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.076269   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:11.076188   72843 retry.go:31] will retry after 317.98463ms: waiting for machine to come up
	I0910 18:59:10.623864   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:10.626709   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:10.627042   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:10.627084   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:10.627336   71529 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:10.631579   71529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:10.644077   71529 kubeadm.go:883] updating cluster {Name:no-preload-347802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-347802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:10.644183   71529 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:59:10.644215   71529 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:10.679225   71529 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 18:59:10.679247   71529 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 18:59:10.679332   71529 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:10.679346   71529 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:10.679371   71529 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:10.679384   71529 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0910 18:59:10.679395   71529 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.679472   71529 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.679371   71529 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:10.679336   71529 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:10.681147   71529 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.681163   71529 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:10.681183   71529 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.681196   71529 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0910 18:59:10.681163   71529 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:10.681189   71529 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:10.681232   71529 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:10.681304   71529 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:10.841312   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.848638   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.872351   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:10.875581   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:10.882457   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:10.894360   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0910 18:59:10.895305   71529 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0910 18:59:10.895341   71529 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.895379   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:10.898460   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:10.953614   71529 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0910 18:59:10.953659   71529 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.953706   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.042770   71529 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0910 18:59:11.042837   71529 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0910 18:59:11.042862   71529 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.042873   71529 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.042914   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.042914   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.042820   71529 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0910 18:59:11.043065   71529 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.043097   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.129993   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:11.130090   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:11.130018   71529 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0910 18:59:11.130143   71529 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.130187   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.130189   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.130206   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.130271   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.239573   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:11.239626   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:11.241780   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.241795   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.241853   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.241883   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.360008   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:11.360027   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:11.360067   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.371623   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.371632   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.371632   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.480504   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0910 18:59:11.480591   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.480615   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0910 18:59:11.480635   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0910 18:59:11.480725   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0910 18:59:11.488248   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:11.510860   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0910 18:59:11.510950   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0910 18:59:11.510959   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0910 18:59:11.511032   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0910 18:59:11.514065   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0910 18:59:11.514136   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0910 18:59:11.555358   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0910 18:59:11.555425   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0910 18:59:11.555445   71529 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0910 18:59:11.555465   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0910 18:59:11.555491   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0910 18:59:11.555497   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0910 18:59:11.578210   71529 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0910 18:59:11.578227   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0910 18:59:11.578258   71529 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:11.578273   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0910 18:59:11.578306   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.578345   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0910 18:59:11.578310   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0910 18:59:11.395907   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.396361   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.396389   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:11.396320   72843 retry.go:31] will retry after 511.273215ms: waiting for machine to come up
	I0910 18:59:11.909582   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.910012   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.910041   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:11.909957   72843 retry.go:31] will retry after 712.801984ms: waiting for machine to come up
	I0910 18:59:12.624608   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:12.625042   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:12.625083   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:12.625014   72843 retry.go:31] will retry after 873.57855ms: waiting for machine to come up
	I0910 18:59:13.499767   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:13.500117   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:13.500144   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:13.500071   72843 retry.go:31] will retry after 1.180667971s: waiting for machine to come up
	I0910 18:59:14.682848   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:14.683351   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:14.683381   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:14.683297   72843 retry.go:31] will retry after 1.211684184s: waiting for machine to come up
	I0910 18:59:15.896172   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:15.896651   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:15.896679   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:15.896597   72843 retry.go:31] will retry after 1.541313035s: waiting for machine to come up
	I0910 18:59:13.534642   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.978971061s)
	I0910 18:59:13.534680   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0910 18:59:13.534686   71529 ssh_runner.go:235] Completed: which crictl: (1.956359959s)
	I0910 18:59:13.534704   71529 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0910 18:59:13.534753   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:13.534754   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0910 18:59:13.580670   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:17.439293   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:17.439652   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:17.439679   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:17.439607   72843 retry.go:31] will retry after 2.232253017s: waiting for machine to come up
	I0910 18:59:19.673727   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:19.674114   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:19.674141   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:19.674070   72843 retry.go:31] will retry after 2.324233118s: waiting for machine to come up
	I0910 18:59:17.225574   71529 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.644871938s)
	I0910 18:59:17.225574   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.690724664s)
	I0910 18:59:17.225647   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0910 18:59:17.225671   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:17.225676   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0910 18:59:17.225702   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0910 18:59:19.705947   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.48021773s)
	I0910 18:59:19.705982   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0910 18:59:19.706006   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0910 18:59:19.706045   71529 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.480359026s)
	I0910 18:59:19.706069   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0910 18:59:19.706098   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0910 18:59:19.706176   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0910 18:59:21.666588   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.960494926s)
	I0910 18:59:21.666623   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0910 18:59:21.666640   71529 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.960446302s)
	I0910 18:59:21.666648   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0910 18:59:21.666666   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0910 18:59:21.666699   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0910 18:59:22.000591   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:22.001014   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:22.001047   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:22.000951   72843 retry.go:31] will retry after 3.327224401s: waiting for machine to come up
	I0910 18:59:25.329967   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:25.330414   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:25.330445   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:25.330367   72843 retry.go:31] will retry after 3.45596573s: waiting for machine to come up
	I0910 18:59:23.216195   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.549470753s)
	I0910 18:59:23.216223   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0910 18:59:23.216243   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0910 18:59:23.216286   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0910 18:59:25.077483   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.861176975s)
	I0910 18:59:25.077515   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0910 18:59:25.077547   71529 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0910 18:59:25.077640   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0910 18:59:25.919427   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0910 18:59:25.919478   71529 cache_images.go:123] Successfully loaded all cached images
	I0910 18:59:25.919486   71529 cache_images.go:92] duration metric: took 15.240223152s to LoadCachedImages
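Because no preload tarball matched v1.31.0/crio, the run above falls back to per-image loading: inspect each required image with podman, remove any stale tag with crictl, skip the copy when the cached archive already exists on the node, and finally `podman load -i` each tarball. A rough Go sketch of that loop, with a hypothetical helper name and the shell-outs kept deliberately simple:

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// loadCachedImage is an illustrative reduction of the flow in the log:
// if the image is not already present in the runtime, remove any stale
// tag and load the cached tarball with podman.
func loadCachedImage(image, tarball string) error {
	// present already? (`podman image inspect` exits non-zero when missing)
	if err := exec.Command("sudo", "podman", "image", "inspect", image).Run(); err == nil {
		return nil
	}
	// drop any stale copy so the freshly loaded one wins
	exec.Command("sudo", "crictl", "rmi", image).Run()
	// load the archive shipped in minikube's image cache
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	images := map[string]string{
		"registry.k8s.io/kube-apiserver:v1.31.0": "kube-apiserver_v1.31.0",
		"registry.k8s.io/etcd:3.5.15-0":          "etcd_3.5.15-0",
	}
	for image, file := range images {
		tarball := filepath.Join("/var/lib/minikube/images", file)
		if err := loadCachedImage(image, tarball); err != nil {
			fmt.Println(err)
		}
	}
}
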
	I0910 18:59:25.919502   71529 kubeadm.go:934] updating node { 192.168.50.138 8443 v1.31.0 crio true true} ...
	I0910 18:59:25.919622   71529 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-347802 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-347802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:59:25.919710   71529 ssh_runner.go:195] Run: crio config
	I0910 18:59:25.964461   71529 cni.go:84] Creating CNI manager for ""
	I0910 18:59:25.964489   71529 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:25.964509   71529 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:25.964535   71529 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.138 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-347802 NodeName:no-preload-347802 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:59:25.964698   71529 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-347802"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:59:25.964780   71529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:59:25.975304   71529 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:25.975371   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:25.985124   71529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0910 18:59:26.003355   71529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:26.020117   71529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0910 18:59:26.037026   71529 ssh_runner.go:195] Run: grep 192.168.50.138	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:26.041140   71529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:26.053643   71529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:26.175281   71529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:26.193153   71529 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802 for IP: 192.168.50.138
	I0910 18:59:26.193181   71529 certs.go:194] generating shared ca certs ...
	I0910 18:59:26.193203   71529 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:26.193398   71529 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:26.193452   71529 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:26.193466   71529 certs.go:256] generating profile certs ...
	I0910 18:59:26.193582   71529 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/client.key
	I0910 18:59:26.193664   71529 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/apiserver.key.93ff3787
	I0910 18:59:26.193722   71529 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/proxy-client.key
	I0910 18:59:26.193871   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:26.193924   71529 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:26.193978   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:26.194026   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:26.194053   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:26.194083   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:26.194132   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:26.194868   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:26.231957   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:26.280213   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:26.310722   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:26.347855   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0910 18:59:26.386495   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:59:26.411742   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:26.435728   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:59:26.460305   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:26.484974   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:26.508782   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:26.531397   71529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:26.548219   71529 ssh_runner.go:195] Run: openssl version
	I0910 18:59:26.553969   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:26.564950   71529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:26.569539   71529 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:26.569594   71529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:26.575677   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:59:26.586342   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:26.606946   71529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:26.611671   71529 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:26.611720   71529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:26.617271   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:26.627833   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:26.638225   71529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:26.642722   71529 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:26.642759   71529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:26.648359   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:26.659003   71529 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:26.663236   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:26.668896   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:26.674346   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:26.680028   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:26.685462   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:26.691097   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
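Each `openssl x509 -checkend 86400` call above asks whether a control-plane certificate will still be valid 24 hours from now; passing checks let the restart reuse the existing certs. A small sketch of the same test with Go's crypto/x509 (the path in main is only an example):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// becomes invalid within the given duration, i.e. what
// `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
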
	I0910 18:59:26.696620   71529 kubeadm.go:392] StartCluster: {Name:no-preload-347802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-347802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:26.696704   71529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:26.696746   71529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:26.733823   71529 cri.go:89] found id: ""
	I0910 18:59:26.733883   71529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:26.744565   71529 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:26.744584   71529 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:26.744620   71529 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:26.754754   71529 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:26.755687   71529 kubeconfig.go:125] found "no-preload-347802" server: "https://192.168.50.138:8443"
	I0910 18:59:26.757732   71529 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:26.767140   71529 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.138
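The diff above is how the restart path judges whether the on-node kubeadm config is still current: the freshly rendered kubeadm.yaml.new is compared with the copy already on the machine, and a match means no reconfiguration is required. A compact illustration of that decision (paths and the helper name are examples, not minikube identifiers):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// needsReconfig compares the live kubeadm config with a freshly rendered
// candidate; a byte-for-byte match means no reconfiguration is required.
func needsReconfig(current, candidate string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		return true, err // no existing config: treat as needing (re)configuration
	}
	b, err := os.ReadFile(candidate)
	if err != nil {
		return true, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("comparison incomplete:", err)
	}
	fmt.Println("reconfiguration required:", changed)
}
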
	I0910 18:59:26.767167   71529 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:26.767180   71529 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:26.767235   71529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:26.805555   71529 cri.go:89] found id: ""
	I0910 18:59:26.805616   71529 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:26.822806   71529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:26.832434   71529 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:26.832456   71529 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:26.832499   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:59:26.841225   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:26.841288   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:26.850145   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:59:26.859016   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:26.859070   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:26.868806   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:59:26.877814   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:26.877867   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:26.886985   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:59:26.895859   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:26.895911   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
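The grep/rm pairs above implement the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; anything else, including files that are simply absent as here, is removed so the following `kubeadm init phase kubeconfig all` can regenerate it. Roughly, in Go (a sketch under the same assumptions, not the upstream code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, name := range confs {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // already points at the expected endpoint: keep it
		}
		// missing or pointing elsewhere: delete so kubeadm regenerates it
		if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
			fmt.Println("could not remove", path, ":", err)
		}
	}
}
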
	I0910 18:59:26.905600   71529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:59:26.915716   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:27.038963   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:30.202285   72122 start.go:364] duration metric: took 3m27.611616445s to acquireMachinesLock for "old-k8s-version-432422"
	I0910 18:59:30.202346   72122 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:30.202377   72122 fix.go:54] fixHost starting: 
	I0910 18:59:30.202807   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:30.202842   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:30.222440   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0910 18:59:30.222927   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:30.223415   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:59:30.223435   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:30.223748   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:30.223905   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:30.224034   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetState
	I0910 18:59:30.225464   72122 fix.go:112] recreateIfNeeded on old-k8s-version-432422: state=Stopped err=<nil>
	I0910 18:59:30.225505   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	W0910 18:59:30.225655   72122 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:30.227698   72122 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-432422" ...
	I0910 18:59:28.790020   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.790390   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Found IP for machine: 192.168.72.54
	I0910 18:59:28.790424   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has current primary IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.790435   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Reserving static IP address...
	I0910 18:59:28.790758   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Reserved static IP address: 192.168.72.54
	I0910 18:59:28.790780   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for SSH to be available...
	I0910 18:59:28.790811   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-557504", mac: "52:54:00:19:b8:3d", ip: "192.168.72.54"} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.790839   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | skip adding static IP to network mk-default-k8s-diff-port-557504 - found existing host DHCP lease matching {name: "default-k8s-diff-port-557504", mac: "52:54:00:19:b8:3d", ip: "192.168.72.54"}
	I0910 18:59:28.790856   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Getting to WaitForSSH function...
	I0910 18:59:28.792644   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.792947   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.792978   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.793114   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Using SSH client type: external
	I0910 18:59:28.793135   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa (-rw-------)
	I0910 18:59:28.793192   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:28.793242   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | About to run SSH command:
	I0910 18:59:28.793272   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | exit 0
	I0910 18:59:28.921644   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | SSH cmd err, output: <nil>: 
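The long run of "will retry after …" lines ends here: libmachine polls the libvirt network until the domain's DHCP lease shows an IP, then confirms reachability by running `exit 0` over SSH with the machine's private key. A stripped-down sketch of that wait loop with growing back-off (the SSH options mirror the command logged above; the key path and timings are placeholders):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// checkSSH runs `exit 0` on the guest, the same liveness probe used above.
func checkSSH(ip, key string) error {
	return exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", key, "docker@"+ip, "exit", "0").Run()
}

func main() {
	ip := "192.168.72.54"                                 // example address from the log
	key := "/home/jenkins/.minikube/machines/demo/id_rsa" // example key path
	delay := 500 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		if err := checkSSH(ip, key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		fmt.Printf("attempt %d failed, retrying after %v\n", attempt, delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the back-off between attempts
	}
	fmt.Println("machine never became reachable")
}
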
	I0910 18:59:28.921983   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetConfigRaw
	I0910 18:59:28.922663   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:28.925273   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.925614   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.925639   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.925884   71627 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/config.json ...
	I0910 18:59:28.926061   71627 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:28.926077   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:28.926272   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:28.928411   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.928731   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.928758   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.928909   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:28.929096   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:28.929249   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:28.929371   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:28.929552   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:28.929722   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:28.929732   71627 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:29.041454   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:29.041486   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetMachineName
	I0910 18:59:29.041745   71627 buildroot.go:166] provisioning hostname "default-k8s-diff-port-557504"
	I0910 18:59:29.041766   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetMachineName
	I0910 18:59:29.041965   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.044784   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.045141   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.045182   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.045358   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.045528   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.045705   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.045810   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.045968   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:29.046158   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:29.046173   71627 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-557504 && echo "default-k8s-diff-port-557504" | sudo tee /etc/hostname
	I0910 18:59:29.180227   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-557504
	
	I0910 18:59:29.180257   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.182815   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.183166   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.183200   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.183416   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.183612   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.183779   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.183883   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.184053   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:29.184258   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:29.184276   71627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-557504' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-557504/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-557504' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:59:29.315908   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:59:29.315942   71627 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:29.315981   71627 buildroot.go:174] setting up certificates
	I0910 18:59:29.315996   71627 provision.go:84] configureAuth start
	I0910 18:59:29.316013   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetMachineName
	I0910 18:59:29.316262   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:29.319207   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.319580   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.319609   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.319780   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.321973   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.322318   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.322352   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.322499   71627 provision.go:143] copyHostCerts
	I0910 18:59:29.322564   71627 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:29.322577   71627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:29.322647   71627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:29.322772   71627 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:29.322786   71627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:29.322832   71627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:29.322938   71627 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:29.322951   71627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:29.322986   71627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:29.323065   71627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-557504 san=[127.0.0.1 192.168.72.54 default-k8s-diff-port-557504 localhost minikube]
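The provision.go:117 line above generates a server certificate whose SANs cover 127.0.0.1, 192.168.72.54, the node hostname, localhost and minikube. The following is a minimal sketch, not minikube's actual provisioning code, of producing a certificate with those SANs using Go's standard library; it self-signs and picks an arbitrary one-year validity purely for illustration, whereas the real provisioner signs with the profile's ca.pem/ca-key.pem.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key and certificate template; the SAN list mirrors the log line above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-557504"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0), // assumed validity, illustration only
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-557504", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.54")},
	}
	// Self-signed here for brevity; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}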
	I0910 18:59:29.488131   71627 provision.go:177] copyRemoteCerts
	I0910 18:59:29.488187   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:29.488210   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.491095   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.491441   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.491467   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.491666   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.491830   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.491973   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.492123   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:29.584016   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:29.614749   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0910 18:59:29.646904   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:29.677788   71627 provision.go:87] duration metric: took 361.777725ms to configureAuth
	I0910 18:59:29.677820   71627 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:29.678048   71627 config.go:182] Loaded profile config "default-k8s-diff-port-557504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:59:29.678135   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.680932   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.681372   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.681394   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.681674   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.681868   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.682011   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.682175   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.682431   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:29.682638   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:29.682665   71627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:29.934027   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:29.934058   71627 machine.go:96] duration metric: took 1.007985288s to provisionDockerMachine
	I0910 18:59:29.934071   71627 start.go:293] postStartSetup for "default-k8s-diff-port-557504" (driver="kvm2")
	I0910 18:59:29.934084   71627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:29.934104   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:29.934415   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:29.934447   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.937552   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.937917   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.937948   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.938110   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.938315   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.938496   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.938645   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:30.030842   71627 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:30.036158   71627 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:30.036180   71627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:30.036267   71627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:30.036380   71627 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:30.036520   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:30.048860   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:30.075362   71627 start.go:296] duration metric: took 141.276186ms for postStartSetup
	I0910 18:59:30.075398   71627 fix.go:56] duration metric: took 20.817735357s for fixHost
	I0910 18:59:30.075421   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:30.078501   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.078996   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.079026   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.079195   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:30.079373   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.079561   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.079704   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:30.079908   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:30.080089   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:30.080102   71627 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:30.202112   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994770.178719125
	
	I0910 18:59:30.202139   71627 fix.go:216] guest clock: 1725994770.178719125
	I0910 18:59:30.202149   71627 fix.go:229] Guest: 2024-09-10 18:59:30.178719125 +0000 UTC Remote: 2024-09-10 18:59:30.075402937 +0000 UTC m=+288.723404352 (delta=103.316188ms)
	I0910 18:59:30.202175   71627 fix.go:200] guest clock delta is within tolerance: 103.316188ms
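The fix.go lines above read the guest clock with `date +%s.%N` over SSH and compare it with the host clock, accepting the ~103ms delta as within tolerance. A rough sketch of that comparison follows; it assumes the nine-digit fractional part that `date +%s.%N` prints and uses a made-up 2s tolerance, while both timestamps are taken from the log lines above.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses `date +%s.%N` output (assumed to carry nine fractional
// digits, i.e. nanoseconds) and returns guest minus host.
func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Both values are taken from the fix.go log lines above.
	host := time.Date(2024, time.September, 10, 18, 59, 30, 75402937, time.UTC)
	delta, err := guestClockDelta("1725994770.178719125", host)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed tolerance, for illustration
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock drift %v exceeds tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}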
	I0910 18:59:30.202184   71627 start.go:83] releasing machines lock for "default-k8s-diff-port-557504", held for 20.944552577s
	I0910 18:59:30.202221   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.202522   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:30.205728   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.206068   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.206101   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.206267   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.206830   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.207011   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.207100   71627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:30.207171   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:30.207378   71627 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:30.207399   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:30.209851   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210130   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210182   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.210220   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210400   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:30.210553   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.210555   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.210625   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210735   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:30.210785   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:30.210849   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.210949   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:30.211002   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:30.211132   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:30.317738   71627 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:30.325333   71627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:30.485483   71627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:30.492979   71627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:30.493064   71627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:30.518974   71627 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:30.518998   71627 start.go:495] detecting cgroup driver to use...
	I0910 18:59:30.519192   71627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:30.539578   71627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:30.554986   71627 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:30.555045   71627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:30.570454   71627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:30.590125   71627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:30.738819   71627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:30.930750   71627 docker.go:233] disabling docker service ...
	I0910 18:59:30.930811   71627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:30.946226   71627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:30.961633   71627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:31.086069   71627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:31.208629   71627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:31.225988   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:31.248059   71627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:59:31.248127   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.260212   71627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:31.260296   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.271128   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.282002   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.296901   71627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:31.309739   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.325469   71627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.350404   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.366130   71627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:31.379206   71627 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:31.379259   71627 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:31.395015   71627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:31.406339   71627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:31.538783   71627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:59:31.656815   71627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:31.656886   71627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:31.665263   71627 start.go:563] Will wait 60s for crictl version
	I0910 18:59:31.665333   71627 ssh_runner.go:195] Run: which crictl
	I0910 18:59:31.670317   71627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:31.719549   71627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:31.719641   71627 ssh_runner.go:195] Run: crio --version
	I0910 18:59:31.753801   71627 ssh_runner.go:195] Run: crio --version
	I0910 18:59:31.787092   71627 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 18:59:28.257536   71529 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.218537615s)
	I0910 18:59:28.257562   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:28.451173   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:28.516432   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:28.605746   71529 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:59:28.605823   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:29.106870   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:29.606340   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:29.623814   71529 api_server.go:72] duration metric: took 1.018071553s to wait for apiserver process to appear ...
	I0910 18:59:29.623842   71529 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:59:29.623864   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:29.624282   71529 api_server.go:269] stopped: https://192.168.50.138:8443/healthz: Get "https://192.168.50.138:8443/healthz": dial tcp 192.168.50.138:8443: connect: connection refused
	I0910 18:59:30.124145   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:30.228896   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .Start
	I0910 18:59:30.229066   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring networks are active...
	I0910 18:59:30.229735   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring network default is active
	I0910 18:59:30.230126   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring network mk-old-k8s-version-432422 is active
	I0910 18:59:30.230559   72122 main.go:141] libmachine: (old-k8s-version-432422) Getting domain xml...
	I0910 18:59:30.231206   72122 main.go:141] libmachine: (old-k8s-version-432422) Creating domain...
	I0910 18:59:31.669616   72122 main.go:141] libmachine: (old-k8s-version-432422) Waiting to get IP...
	I0910 18:59:31.670682   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:31.671124   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:31.671225   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:31.671101   72995 retry.go:31] will retry after 285.109621ms: waiting for machine to come up
	I0910 18:59:31.957711   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:31.958140   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:31.958169   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:31.958103   72995 retry.go:31] will retry after 306.703176ms: waiting for machine to come up
	I0910 18:59:32.266797   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:32.267299   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:32.267333   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:32.267226   72995 retry.go:31] will retry after 327.953362ms: waiting for machine to come up
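The retry.go lines above poll libvirt for old-k8s-version-432422's DHCP lease, backing off a few hundred milliseconds between attempts. Below is a hedged sketch of that retry shape only; lookupIP is a stand-in placeholder, not a real minikube or libvirt call, and the delay range is chosen to resemble the values seen in the log.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the libvirt DHCP-lease query; it is a placeholder for
// illustration and succeeds only after a few attempts.
func lookupIP(attempt int) (string, error) {
	if attempt < 3 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.72.54", nil
}

func main() {
	for attempt := 0; attempt < 10; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Randomized delay in the same few-hundred-millisecond range seen in the log.
		delay := time.Duration(250+rand.Intn(150)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	fmt.Println("gave up waiting for machine to come up")
}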
	I0910 18:59:32.494151   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:59:32.494177   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:59:32.494193   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:32.550283   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:59:32.550317   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:59:32.624486   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:32.646548   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:32.646583   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:33.124697   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:33.139775   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:33.139814   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:33.623998   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:33.632392   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:33.632430   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:34.123979   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:34.133552   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0910 18:59:34.143511   71529 api_server.go:141] control plane version: v1.31.0
	I0910 18:59:34.143543   71529 api_server.go:131] duration metric: took 4.519693435s to wait for apiserver health ...
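The api_server.go checks above poll https://192.168.50.138:8443/healthz repeatedly, treating the interim 403 and 500 responses as "not ready yet" until the 200 arrives roughly 4.5s in. A minimal sketch of such a polling loop follows; the URL is taken from the log, while the timeout, interval and skipped TLS verification are assumptions of this example, not minikube's actual wait logic.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the timeout elapses; 403/500 while post-start hooks finish just means "retry".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The log's checks reach the apiserver by IP, so certificate verification
		// is skipped here; a real client should trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.138:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}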
	I0910 18:59:34.143552   71529 cni.go:84] Creating CNI manager for ""
	I0910 18:59:34.143558   71529 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:34.145562   71529 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 18:59:31.788472   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:31.791698   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:31.792063   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:31.792102   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:31.792342   71627 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:31.798045   71627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:31.814552   71627 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-557504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-557504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:31.814718   71627 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:59:31.814775   71627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:31.863576   71627 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 18:59:31.863655   71627 ssh_runner.go:195] Run: which lz4
	I0910 18:59:31.868776   71627 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 18:59:31.874162   71627 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 18:59:31.874194   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 18:59:33.358271   71627 crio.go:462] duration metric: took 1.489531006s to copy over tarball
	I0910 18:59:33.358356   71627 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 18:59:35.759805   71627 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.401424942s)
	I0910 18:59:35.759833   71627 crio.go:469] duration metric: took 2.401529016s to extract the tarball
	I0910 18:59:35.759842   71627 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 18:59:35.797349   71627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:35.849544   71627 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:59:35.849571   71627 cache_images.go:84] Images are preloaded, skipping loading
	I0910 18:59:35.849583   71627 kubeadm.go:934] updating node { 192.168.72.54 8444 v1.31.0 crio true true} ...
	I0910 18:59:35.849706   71627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-557504 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-557504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:59:35.849783   71627 ssh_runner.go:195] Run: crio config
	I0910 18:59:35.896486   71627 cni.go:84] Creating CNI manager for ""
	I0910 18:59:35.896514   71627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:35.896534   71627 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:35.896556   71627 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-557504 NodeName:default-k8s-diff-port-557504 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:59:35.896707   71627 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-557504"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:59:35.896777   71627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
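The kubeadm config rendered a few lines above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Below is a small sketch of iterating over those documents programmatically; the gopkg.in/yaml.v3 dependency and the local kubeadm.yaml file path are assumptions of this example, not something the test run itself uses.

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// e.g. a local copy of /var/tmp/minikube/kubeadm.yaml from the node.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more YAML documents
			}
			panic(err)
		}
		// Every document in the config above carries apiVersion and kind.
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}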
	I0910 18:59:35.907249   71627 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:35.907337   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:35.917196   71627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0910 18:59:35.935072   71627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:35.953823   71627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0910 18:59:35.970728   71627 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:35.974648   71627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:35.986487   71627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:36.144443   71627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:36.164942   71627 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504 for IP: 192.168.72.54
	I0910 18:59:36.164972   71627 certs.go:194] generating shared ca certs ...
	I0910 18:59:36.164990   71627 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:36.165172   71627 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:36.165242   71627 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:36.165255   71627 certs.go:256] generating profile certs ...
	I0910 18:59:36.165382   71627 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/client.key
	I0910 18:59:36.165460   71627 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/apiserver.key.5cc31a18
	I0910 18:59:36.165505   71627 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/proxy-client.key
	I0910 18:59:36.165640   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:36.165680   71627 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:36.165700   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:36.165733   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:36.165770   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:36.165803   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:36.165874   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:36.166687   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:36.203302   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:36.230599   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:36.269735   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:36.311674   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0910 18:59:36.354614   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 18:59:36.379082   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:34.146903   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 18:59:34.163037   71529 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 18:59:34.189830   71529 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:59:34.200702   71529 system_pods.go:59] 8 kube-system pods found
	I0910 18:59:34.200751   71529 system_pods.go:61] "coredns-6f6b679f8f-54rpl" [2e301d43-a54a-4836-abf8-a45f5bc15889] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 18:59:34.200762   71529 system_pods.go:61] "etcd-no-preload-347802" [0fdffb97-72c6-4588-9593-46bcbed0a9fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 18:59:34.200773   71529 system_pods.go:61] "kube-apiserver-no-preload-347802" [3cf5abac-1d94-4ee2-a962-9daad308ec8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 18:59:34.200782   71529 system_pods.go:61] "kube-controller-manager-no-preload-347802" [6769757d-57fd-46c8-8f78-d20f80e592d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 18:59:34.200788   71529 system_pods.go:61] "kube-proxy-7v9n8" [d01842ad-3dae-49e1-8570-db9bcf4d0afc] Running
	I0910 18:59:34.200797   71529 system_pods.go:61] "kube-scheduler-no-preload-347802" [20e59c6b-4387-4dd0-b242-78d107775275] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 18:59:34.200804   71529 system_pods.go:61] "metrics-server-6867b74b74-w8rqv" [52535081-4503-4136-963d-6b2db6c0224e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 18:59:34.200809   71529 system_pods.go:61] "storage-provisioner" [9f7c0178-7194-4c73-95a4-5a3c0091f3ac] Running
	I0910 18:59:34.200816   71529 system_pods.go:74] duration metric: took 10.965409ms to wait for pod list to return data ...
	I0910 18:59:34.200857   71529 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:59:34.204544   71529 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:59:34.204568   71529 node_conditions.go:123] node cpu capacity is 2
	I0910 18:59:34.204580   71529 node_conditions.go:105] duration metric: took 3.714534ms to run NodePressure ...
	I0910 18:59:34.204597   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:34.487106   71529 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 18:59:34.491817   71529 kubeadm.go:739] kubelet initialised
	I0910 18:59:34.491838   71529 kubeadm.go:740] duration metric: took 4.708046ms waiting for restarted kubelet to initialise ...
	I0910 18:59:34.491845   71529 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:34.496604   71529 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:34.501535   71529 pod_ready.go:98] node "no-preload-347802" hosting pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.501553   71529 pod_ready.go:82] duration metric: took 4.927724ms for pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:34.501561   71529 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-347802" hosting pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.501567   71529 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:34.505473   71529 pod_ready.go:98] node "no-preload-347802" hosting pod "etcd-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.505491   71529 pod_ready.go:82] duration metric: took 3.917111ms for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:34.505499   71529 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-347802" hosting pod "etcd-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.505507   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:34.510025   71529 pod_ready.go:98] node "no-preload-347802" hosting pod "kube-apiserver-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.510043   71529 pod_ready.go:82] duration metric: took 4.522609ms for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:34.510050   71529 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-347802" hosting pod "kube-apiserver-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.510056   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:36.519023   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:32.597017   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:32.597589   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:32.597616   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:32.597554   72995 retry.go:31] will retry after 448.654363ms: waiting for machine to come up
	I0910 18:59:33.048100   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:33.048559   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:33.048590   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:33.048478   72995 retry.go:31] will retry after 654.829574ms: waiting for machine to come up
	I0910 18:59:33.704902   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:33.705446   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:33.705475   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:33.705363   72995 retry.go:31] will retry after 610.514078ms: waiting for machine to come up
	I0910 18:59:34.316978   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:34.317481   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:34.317503   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:34.317430   72995 retry.go:31] will retry after 1.125805817s: waiting for machine to come up
	I0910 18:59:35.444880   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:35.445369   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:35.445394   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:35.445312   72995 retry.go:31] will retry after 1.484426931s: waiting for machine to come up
	I0910 18:59:36.931028   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:36.931568   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:36.931613   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:36.931524   72995 retry.go:31] will retry after 1.819998768s: waiting for machine to come up
	I0910 18:59:36.403353   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 18:59:36.427345   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:36.452765   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:36.485795   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:36.512944   71627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:36.532454   71627 ssh_runner.go:195] Run: openssl version
	I0910 18:59:36.538449   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:36.550806   71627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:36.555761   71627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:36.555819   71627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:36.562430   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:36.573730   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:36.584987   71627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:36.589551   71627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:36.589615   71627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:36.595496   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:36.607821   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:36.620298   71627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:36.624888   71627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:36.624939   71627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:36.630534   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
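	(Each certificate above is installed into the system trust store in two steps: copy the PEM into /usr/share/ca-certificates, then create a symlink named <subject-hash>.0 in /etc/ssl/certs, which is the lookup convention OpenSSL uses for CA directories. The Go sketch below illustrates that convention by shelling out to the same openssl binary; the paths are illustrative, not minikube's implementation.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of a PEM certificate and
	// creates the "<hash>.0" symlink that the system trust store expects.
	func linkCACert(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any stale link, like `ln -fs`
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}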
	I0910 18:59:36.641657   71627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:36.646317   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:36.652748   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:36.661166   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:36.670240   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:36.676776   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:36.686442   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
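	(The -checkend 86400 probes above ask openssl whether each certificate will still be valid 24 hours from now; a non-zero exit would mean the certificate is about to expire and needs regeneration. The same check can be made without shelling out, for example with Go's crypto/x509; this is only a sketch, using one of the file paths named in the log.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file
	// expires within the given window, mirroring `openssl x509 -checkend`.
	func expiresWithin(pemPath string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}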
	I0910 18:59:36.693233   71627 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-557504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-557504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:36.693351   71627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:36.693414   71627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:36.743159   71627 cri.go:89] found id: ""
	I0910 18:59:36.743256   71627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:36.754428   71627 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:36.754451   71627 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:36.754505   71627 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:36.765126   71627 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:36.766213   71627 kubeconfig.go:125] found "default-k8s-diff-port-557504" server: "https://192.168.72.54:8444"
	I0910 18:59:36.768428   71627 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:36.778678   71627 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I0910 18:59:36.778715   71627 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:36.778728   71627 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:36.778779   71627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:36.824031   71627 cri.go:89] found id: ""
	I0910 18:59:36.824107   71627 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:36.840585   71627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:36.851445   71627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:36.851462   71627 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:36.851508   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0910 18:59:36.860630   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:36.860682   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:36.869973   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0910 18:59:36.880034   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:36.880099   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:36.889684   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0910 18:59:36.898786   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:36.898870   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:36.908328   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0910 18:59:36.917272   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:36.917334   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
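	(The grep/rm pairs above drop any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8444, so kubeadm can regenerate them in the next step. Below is a rough Go equivalent of that cleanup, using the paths taken from the log; it is an illustration, not the actual minikube code.)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// removeIfStale deletes a kubeconfig that does not reference the expected
	// control-plane endpoint, mirroring the grep/rm pairs in the log.
	func removeIfStale(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			if os.IsNotExist(err) {
				return nil // nothing to clean up
			}
			return err
		}
		if strings.Contains(string(data), endpoint) {
			return nil // config already targets the right endpoint
		}
		return os.Remove(path)
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:8444"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if err := removeIfStale(f, endpoint); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}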
	I0910 18:59:36.928923   71627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:59:36.940238   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:37.079143   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:37.945317   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:38.157807   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:38.245283   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
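	(Because existing configuration was found, the restart path re-runs kubeadm as individual init phases, certs, kubeconfig, kubelet-start, control-plane and etcd local, instead of a full kubeadm init. The sketch below drives the same phase sequence from Go; it assumes kubeadm and the config file exist at the paths shown in the log.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		config := "/var/tmp/minikube/kubeadm.yaml"
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, phase := range phases {
			args := append(phase, "--config", config)
			fmt.Println("running: kubeadm", args)
			cmd := exec.Command("kubeadm", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
				os.Exit(1)
			}
		}
	}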
	I0910 18:59:38.353653   71627 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:59:38.353746   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:38.854791   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:39.354743   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:39.409511   71627 api_server.go:72] duration metric: took 1.055855393s to wait for apiserver process to appear ...
	I0910 18:59:39.409543   71627 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:59:39.409566   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:39.410104   71627 api_server.go:269] stopped: https://192.168.72.54:8444/healthz: Get "https://192.168.72.54:8444/healthz": dial tcp 192.168.72.54:8444: connect: connection refused
	I0910 18:59:39.909665   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:39.018802   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:41.517911   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:38.753463   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:38.754076   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:38.754107   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:38.754019   72995 retry.go:31] will retry after 2.258214375s: waiting for machine to come up
	I0910 18:59:41.013524   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:41.013988   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:41.014011   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:41.013910   72995 retry.go:31] will retry after 2.030553777s: waiting for machine to come up
	I0910 18:59:41.976133   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:59:41.976166   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:59:41.976179   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:42.080631   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:42.080674   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:42.409865   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:42.421093   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:42.421174   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:42.910272   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:42.914729   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:42.914757   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:43.410280   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:43.414731   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I0910 18:59:43.421135   71627 api_server.go:141] control plane version: v1.31.0
	I0910 18:59:43.421163   71627 api_server.go:131] duration metric: took 4.011612782s to wait for apiserver health ...
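	(The block above polls https://192.168.72.54:8444/healthz until it returns 200, tolerating the 403 "anonymous user" and 500 "post-start hooks still settling" responses seen on the way up. Below is a minimal Go sketch of such a wait loop; TLS verification is skipped purely for illustration, since the probe runs before client credentials are configured.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// 200 OK or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.54:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}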
	I0910 18:59:43.421172   71627 cni.go:84] Creating CNI manager for ""
	I0910 18:59:43.421178   71627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:43.423063   71627 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 18:59:43.424278   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 18:59:43.434823   71627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 18:59:43.461604   71627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:59:43.477566   71627 system_pods.go:59] 8 kube-system pods found
	I0910 18:59:43.477592   71627 system_pods.go:61] "coredns-6f6b679f8f-nq9fl" [87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 18:59:43.477600   71627 system_pods.go:61] "etcd-default-k8s-diff-port-557504" [484ce7e2-e5b6-4b96-928e-c4519e072425] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 18:59:43.477606   71627 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-557504" [55a71e48-2cca-484c-8186-176494f8158c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 18:59:43.477616   71627 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-557504" [98e7866a-c4a1-4b90-a758-506502405feb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 18:59:43.477623   71627 system_pods.go:61] "kube-proxy-4t8r9" [aca739fc-0169-433b-85f1-17bf3ab538cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0910 18:59:43.477631   71627 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-557504" [eaa24622-2f9c-47bd-b002-173d5fb06126] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 18:59:43.477638   71627 system_pods.go:61] "metrics-server-6867b74b74-4sfwg" [6b5d0161-6a62-4752-b714-ada6b3772956] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 18:59:43.477648   71627 system_pods.go:61] "storage-provisioner" [7536d42b-90f4-44de-a7ba-652f8e535304] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 18:59:43.477658   71627 system_pods.go:74] duration metric: took 16.035701ms to wait for pod list to return data ...
	I0910 18:59:43.477673   71627 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:59:43.485818   71627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:59:43.485840   71627 node_conditions.go:123] node cpu capacity is 2
	I0910 18:59:43.485850   71627 node_conditions.go:105] duration metric: took 8.173642ms to run NodePressure ...
	I0910 18:59:43.485864   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:43.752422   71627 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 18:59:43.756713   71627 kubeadm.go:739] kubelet initialised
	I0910 18:59:43.756735   71627 kubeadm.go:740] duration metric: took 4.285787ms waiting for restarted kubelet to initialise ...
	I0910 18:59:43.756744   71627 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:43.762384   71627 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.767080   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.767099   71627 pod_ready.go:82] duration metric: took 4.695864ms for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.767109   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.767116   71627 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.772560   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.772579   71627 pod_ready.go:82] duration metric: took 5.453737ms for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.772588   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.772593   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.776328   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.776345   71627 pod_ready.go:82] duration metric: took 3.745149ms for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.776352   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.776357   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.865825   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.865850   71627 pod_ready.go:82] duration metric: took 89.48636ms for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.865862   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.865868   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:44.264892   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-proxy-4t8r9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.264922   71627 pod_ready.go:82] duration metric: took 399.047611ms for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:44.264932   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-proxy-4t8r9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.264938   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:44.665376   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.665402   71627 pod_ready.go:82] duration metric: took 400.457184ms for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:44.665413   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.665418   71627 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:45.065696   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:45.065724   71627 pod_ready.go:82] duration metric: took 400.298527ms for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:45.065736   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:45.065743   71627 pod_ready.go:39] duration metric: took 1.308988307s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:45.065759   71627 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 18:59:45.077813   71627 ops.go:34] apiserver oom_adj: -16
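	(The oom_adj value of -16 read above confirms that the kube-apiserver process is shielded from the OOM killer. The small Go sketch below locates the process via /proc and reads the same file; it is illustrative only, not the code behind the log line.)

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// apiserverOOMAdj finds the kube-apiserver process and returns the
	// contents of its /proc/<pid>/oom_adj file, as the log line does.
	func apiserverOOMAdj() (string, error) {
		procs, err := filepath.Glob("/proc/[0-9]*/comm")
		if err != nil {
			return "", err
		}
		for _, comm := range procs {
			name, err := os.ReadFile(comm)
			if err != nil {
				continue
			}
			if strings.TrimSpace(string(name)) == "kube-apiserver" {
				adj, err := os.ReadFile(filepath.Join(filepath.Dir(comm), "oom_adj"))
				if err != nil {
					return "", err
				}
				return strings.TrimSpace(string(adj)), nil
			}
		}
		return "", fmt.Errorf("kube-apiserver process not found")
	}

	func main() {
		adj, err := apiserverOOMAdj()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("kube-apiserver oom_adj:", adj)
	}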
	I0910 18:59:45.077838   71627 kubeadm.go:597] duration metric: took 8.323378955s to restartPrimaryControlPlane
	I0910 18:59:45.077846   71627 kubeadm.go:394] duration metric: took 8.384626167s to StartCluster
	I0910 18:59:45.077860   71627 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:45.077980   71627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:59:45.079979   71627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:45.080304   71627 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 18:59:45.080399   71627 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 18:59:45.080478   71627 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-557504"
	I0910 18:59:45.080510   71627 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-557504"
	I0910 18:59:45.080506   71627 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-557504"
	W0910 18:59:45.080523   71627 addons.go:243] addon storage-provisioner should already be in state true
	I0910 18:59:45.080519   71627 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-557504"
	I0910 18:59:45.080553   71627 host.go:66] Checking if "default-k8s-diff-port-557504" exists ...
	I0910 18:59:45.080568   71627 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-557504"
	I0910 18:59:45.080568   71627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-557504"
	W0910 18:59:45.080582   71627 addons.go:243] addon metrics-server should already be in state true
	I0910 18:59:45.080529   71627 config.go:182] Loaded profile config "default-k8s-diff-port-557504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:59:45.080608   71627 host.go:66] Checking if "default-k8s-diff-port-557504" exists ...
	I0910 18:59:45.080906   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.080932   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.080989   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.080994   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.081015   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.081015   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.081905   71627 out.go:177] * Verifying Kubernetes components...
	I0910 18:59:45.083206   71627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:45.096019   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43935
	I0910 18:59:45.096288   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38379
	I0910 18:59:45.096453   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.096730   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.096984   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.097012   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.097243   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.097273   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.097401   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.097596   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.097678   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0910 18:59:45.097693   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.098049   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.098464   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.098504   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.099185   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.099207   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.099592   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.100125   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.100166   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.101159   71627 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-557504"
	W0910 18:59:45.101175   71627 addons.go:243] addon default-storageclass should already be in state true
	I0910 18:59:45.101203   71627 host.go:66] Checking if "default-k8s-diff-port-557504" exists ...
	I0910 18:59:45.101501   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.101537   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.114823   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41241
	I0910 18:59:45.115253   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.115363   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38597
	I0910 18:59:45.115737   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.115759   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.115795   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.116106   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.116244   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.116270   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.116289   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.116696   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.117290   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.117327   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.117546   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36199
	I0910 18:59:45.117879   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.118496   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:45.118631   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.118643   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.118949   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.119107   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.120353   71627 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 18:59:45.120775   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:45.121685   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 18:59:45.121699   71627 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 18:59:45.121718   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:45.122500   71627 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:45.123762   71627 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:59:45.123778   71627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 18:59:45.123792   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:45.125345   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.125926   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:45.126161   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:45.126357   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:45.125943   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.126548   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:45.126661   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:45.127075   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.127507   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:45.127522   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.127675   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:45.127810   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:45.127905   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:45.127997   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:45.132978   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39839
	I0910 18:59:45.133303   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.133757   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.133779   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.134043   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.134188   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.135712   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:45.135917   71627 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 18:59:45.135928   71627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 18:59:45.135938   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:45.138375   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.138616   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:45.138629   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.138768   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:45.138937   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:45.139054   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:45.139181   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:45.293036   71627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:45.311747   71627 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-557504" to be "Ready" ...
	I0910 18:59:45.425820   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 18:59:45.425852   71627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 18:59:45.430783   71627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:59:45.441452   71627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 18:59:45.481245   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 18:59:45.481268   71627 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 18:59:45.573348   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 18:59:45.573373   71627 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 18:59:45.634830   71627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 18:59:46.589194   71627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.147713188s)
	I0910 18:59:46.589253   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589266   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589284   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589311   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589321   71627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.158508631s)
	I0910 18:59:46.589343   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589355   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589700   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.589700   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.589723   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589729   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589730   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.589736   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.589738   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589741   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.589751   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.589752   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589761   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589774   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589816   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589755   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589852   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589961   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589971   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.590192   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.590207   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.590220   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.591675   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.591692   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.591702   71627 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-557504"
	I0910 18:59:46.595906   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.595921   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.596105   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.596126   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.598033   71627 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0910 18:59:44.023282   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:46.516768   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:47.016400   71529 pod_ready.go:93] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:47.016423   71529 pod_ready.go:82] duration metric: took 12.506359172s for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:47.016435   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7v9n8" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:47.020809   71529 pod_ready.go:93] pod "kube-proxy-7v9n8" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:47.020827   71529 pod_ready.go:82] duration metric: took 4.386051ms for pod "kube-proxy-7v9n8" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:47.020836   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.046937   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:43.047363   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:43.047393   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:43.047314   72995 retry.go:31] will retry after 2.233047134s: waiting for machine to come up
	I0910 18:59:45.282610   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:45.283104   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:45.283133   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:45.283026   72995 retry.go:31] will retry after 4.238676711s: waiting for machine to come up
	I0910 18:59:51.182133   71183 start.go:364] duration metric: took 56.422548201s to acquireMachinesLock for "embed-certs-836868"
	I0910 18:59:51.182195   71183 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:51.182206   71183 fix.go:54] fixHost starting: 
	I0910 18:59:51.182600   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:51.182637   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:51.198943   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33067
	I0910 18:59:51.199345   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:51.199803   71183 main.go:141] libmachine: Using API Version  1
	I0910 18:59:51.199828   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:51.200153   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:51.200364   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 18:59:51.200493   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 18:59:51.202100   71183 fix.go:112] recreateIfNeeded on embed-certs-836868: state=Stopped err=<nil>
	I0910 18:59:51.202123   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	W0910 18:59:51.202286   71183 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:51.204028   71183 out.go:177] * Restarting existing kvm2 VM for "embed-certs-836868" ...
	I0910 18:59:46.599125   71627 addons.go:510] duration metric: took 1.518742666s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0910 18:59:47.316003   71627 node_ready.go:53] node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:49.316691   71627 node_ready.go:53] node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:49.027374   71529 pod_ready.go:93] pod "kube-scheduler-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:49.027393   71529 pod_ready.go:82] duration metric: took 2.006551523s for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:49.027403   71529 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:51.034568   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:51.205180   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Start
	I0910 18:59:51.205332   71183 main.go:141] libmachine: (embed-certs-836868) Ensuring networks are active...
	I0910 18:59:51.205952   71183 main.go:141] libmachine: (embed-certs-836868) Ensuring network default is active
	I0910 18:59:51.206322   71183 main.go:141] libmachine: (embed-certs-836868) Ensuring network mk-embed-certs-836868 is active
	I0910 18:59:51.206717   71183 main.go:141] libmachine: (embed-certs-836868) Getting domain xml...
	I0910 18:59:51.207430   71183 main.go:141] libmachine: (embed-certs-836868) Creating domain...
	I0910 18:59:49.526000   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.526536   72122 main.go:141] libmachine: (old-k8s-version-432422) Found IP for machine: 192.168.61.51
	I0910 18:59:49.526558   72122 main.go:141] libmachine: (old-k8s-version-432422) Reserving static IP address...
	I0910 18:59:49.526569   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has current primary IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.527018   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "old-k8s-version-432422", mac: "52:54:00:65:ab:4d", ip: "192.168.61.51"} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.527063   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | skip adding static IP to network mk-old-k8s-version-432422 - found existing host DHCP lease matching {name: "old-k8s-version-432422", mac: "52:54:00:65:ab:4d", ip: "192.168.61.51"}
	I0910 18:59:49.527084   72122 main.go:141] libmachine: (old-k8s-version-432422) Reserved static IP address: 192.168.61.51
	I0910 18:59:49.527099   72122 main.go:141] libmachine: (old-k8s-version-432422) Waiting for SSH to be available...
	I0910 18:59:49.527113   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Getting to WaitForSSH function...
	I0910 18:59:49.529544   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.529962   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.529987   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.530143   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Using SSH client type: external
	I0910 18:59:49.530170   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa (-rw-------)
	I0910 18:59:49.530195   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:49.530208   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | About to run SSH command:
	I0910 18:59:49.530245   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | exit 0
	I0910 18:59:49.656944   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | SSH cmd err, output: <nil>: 
	I0910 18:59:49.657307   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetConfigRaw
	I0910 18:59:49.657926   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:49.660332   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.660689   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.660711   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.660992   72122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/config.json ...
	I0910 18:59:49.661238   72122 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:49.661259   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:49.661480   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.663824   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.664208   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.664236   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.664370   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.664565   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.664712   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.664887   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.665103   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.665392   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.665406   72122 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:49.769433   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:49.769468   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:49.769716   72122 buildroot.go:166] provisioning hostname "old-k8s-version-432422"
	I0910 18:59:49.769740   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:49.769918   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.772324   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.772710   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.772736   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.772875   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.773061   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.773245   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.773384   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.773554   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.773751   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.773764   72122 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-432422 && echo "old-k8s-version-432422" | sudo tee /etc/hostname
	I0910 18:59:49.891230   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-432422
	
	I0910 18:59:49.891259   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.894272   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.894641   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.894683   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.894820   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.894983   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.895094   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.895210   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.895330   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.895540   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.895559   72122 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-432422' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-432422/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-432422' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:59:50.011767   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:59:50.011795   72122 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:50.011843   72122 buildroot.go:174] setting up certificates
	I0910 18:59:50.011854   72122 provision.go:84] configureAuth start
	I0910 18:59:50.011866   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:50.012185   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:50.014947   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.015352   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.015388   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.015549   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.017712   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.018002   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.018036   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.018193   72122 provision.go:143] copyHostCerts
	I0910 18:59:50.018251   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:50.018265   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:50.018337   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:50.018481   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:50.018491   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:50.018513   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:50.018585   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:50.018594   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:50.018612   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:50.018667   72122 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-432422 san=[127.0.0.1 192.168.61.51 localhost minikube old-k8s-version-432422]
	I0910 18:59:50.528798   72122 provision.go:177] copyRemoteCerts
	I0910 18:59:50.528864   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:50.528900   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.532154   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.532576   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.532613   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.532765   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.532995   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.533205   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.533370   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:50.620169   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0910 18:59:50.647163   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:50.679214   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:50.704333   72122 provision.go:87] duration metric: took 692.46607ms to configureAuth
	I0910 18:59:50.704360   72122 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:50.704545   72122 config.go:182] Loaded profile config "old-k8s-version-432422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0910 18:59:50.704639   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.707529   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.707903   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.707931   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.708082   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.708297   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.708463   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.708641   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.708786   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:50.708954   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:50.708969   72122 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:50.935375   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:50.935403   72122 machine.go:96] duration metric: took 1.274152353s to provisionDockerMachine
	I0910 18:59:50.935414   72122 start.go:293] postStartSetup for "old-k8s-version-432422" (driver="kvm2")
	I0910 18:59:50.935424   72122 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:50.935448   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:50.935763   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:50.935796   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.938507   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.938865   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.938902   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.939008   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.939198   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.939529   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.939689   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.024726   72122 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:51.029522   72122 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:51.029547   72122 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:51.029632   72122 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:51.029734   72122 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:51.029848   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:51.042454   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:51.068748   72122 start.go:296] duration metric: took 133.318275ms for postStartSetup
	I0910 18:59:51.068792   72122 fix.go:56] duration metric: took 20.866428313s for fixHost
	I0910 18:59:51.068816   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.071533   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.071894   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.071921   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.072072   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.072264   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.072468   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.072616   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.072784   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:51.072938   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:51.072948   72122 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:51.181996   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994791.151610055
	
	I0910 18:59:51.182016   72122 fix.go:216] guest clock: 1725994791.151610055
	I0910 18:59:51.182024   72122 fix.go:229] Guest: 2024-09-10 18:59:51.151610055 +0000 UTC Remote: 2024-09-10 18:59:51.068796263 +0000 UTC m=+228.614166738 (delta=82.813792ms)
	I0910 18:59:51.182048   72122 fix.go:200] guest clock delta is within tolerance: 82.813792ms
	I0910 18:59:51.182055   72122 start.go:83] releasing machines lock for "old-k8s-version-432422", held for 20.979733564s
	I0910 18:59:51.182094   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.182331   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:51.184857   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.185183   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.185212   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.185346   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.185840   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.186006   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.186079   72122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:51.186143   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.186215   72122 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:51.186238   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.189304   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189674   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.189698   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189765   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189879   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.190057   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.190212   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.190230   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.190255   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.190358   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.190470   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.190652   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.190817   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.190948   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.296968   72122 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:51.303144   72122 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:51.447027   72122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:51.454963   72122 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:51.455032   72122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:51.474857   72122 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:51.474882   72122 start.go:495] detecting cgroup driver to use...
	I0910 18:59:51.474957   72122 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:51.490457   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:51.504502   72122 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:51.504569   72122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:51.523331   72122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:51.543438   72122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:51.678734   72122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:51.831736   72122 docker.go:233] disabling docker service ...
	I0910 18:59:51.831804   72122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:51.846805   72122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:51.865771   72122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:52.012922   72122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:52.161595   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:52.180034   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:52.200984   72122 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0910 18:59:52.201041   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.211927   72122 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:52.211989   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.223601   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.234211   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.246209   72122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:52.264079   72122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:52.277144   72122 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:52.277204   72122 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:52.292683   72122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:52.304601   72122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:52.421971   72122 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:59:52.544386   72122 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:52.544459   72122 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:52.551436   72122 start.go:563] Will wait 60s for crictl version
	I0910 18:59:52.551487   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:52.555614   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:52.598031   72122 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:52.598128   72122 ssh_runner.go:195] Run: crio --version
	I0910 18:59:52.629578   72122 ssh_runner.go:195] Run: crio --version
	I0910 18:59:52.662403   72122 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0910 18:59:51.815436   71627 node_ready.go:53] node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:52.816775   71627 node_ready.go:49] node "default-k8s-diff-port-557504" has status "Ready":"True"
	I0910 18:59:52.816809   71627 node_ready.go:38] duration metric: took 7.505015999s for node "default-k8s-diff-port-557504" to be "Ready" ...
	I0910 18:59:52.816821   71627 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:52.823528   71627 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.829667   71627 pod_ready.go:93] pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:52.829688   71627 pod_ready.go:82] duration metric: took 6.135159ms for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.829696   71627 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.833912   71627 pod_ready.go:93] pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:52.833933   71627 pod_ready.go:82] duration metric: took 4.231672ms for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.833942   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.838863   71627 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:52.838883   71627 pod_ready.go:82] duration metric: took 4.934379ms for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.838897   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:53.851413   71627 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:53.851437   71627 pod_ready.go:82] duration metric: took 1.012531075s for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:53.851447   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:54.020886   71627 pod_ready.go:93] pod "kube-proxy-4t8r9" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:54.020910   71627 pod_ready.go:82] duration metric: took 169.456474ms for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:54.020926   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:55.217416   71627 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:55.217440   71627 pod_ready.go:82] duration metric: took 1.196506075s for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:55.217451   71627 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:53.036769   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:55.536544   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:52.544041   71183 main.go:141] libmachine: (embed-certs-836868) Waiting to get IP...
	I0910 18:59:52.545001   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:52.545522   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:52.545586   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:52.545494   73202 retry.go:31] will retry after 260.451431ms: waiting for machine to come up
	I0910 18:59:52.807914   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:52.808351   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:52.808377   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:52.808307   73202 retry.go:31] will retry after 340.526757ms: waiting for machine to come up
	I0910 18:59:53.150854   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:53.151446   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:53.151476   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:53.151404   73202 retry.go:31] will retry after 470.620322ms: waiting for machine to come up
	I0910 18:59:53.624169   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:53.624709   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:53.624747   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:53.624657   73202 retry.go:31] will retry after 529.186273ms: waiting for machine to come up
	I0910 18:59:54.155156   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:54.155644   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:54.155673   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:54.155599   73202 retry.go:31] will retry after 575.877001ms: waiting for machine to come up
	I0910 18:59:54.733522   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:54.734049   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:54.734092   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:54.734000   73202 retry.go:31] will retry after 577.385946ms: waiting for machine to come up
	I0910 18:59:55.312705   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:55.313087   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:55.313114   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:55.313059   73202 retry.go:31] will retry after 735.788809ms: waiting for machine to come up
	I0910 18:59:56.049771   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:56.050272   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:56.050306   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:56.050224   73202 retry.go:31] will retry after 1.433431053s: waiting for machine to come up
	I0910 18:59:52.663465   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:52.666401   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:52.666796   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:52.666843   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:52.667002   72122 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:52.672338   72122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:52.688427   72122 kubeadm.go:883] updating cluster {Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:52.688559   72122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 18:59:52.688623   72122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:52.740370   72122 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0910 18:59:52.740447   72122 ssh_runner.go:195] Run: which lz4
	I0910 18:59:52.744925   72122 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 18:59:52.749840   72122 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 18:59:52.749872   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0910 18:59:54.437031   72122 crio.go:462] duration metric: took 1.692132914s to copy over tarball
	I0910 18:59:54.437124   72122 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 18:59:57.462705   72122 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.025545297s)
	I0910 18:59:57.462743   72122 crio.go:469] duration metric: took 3.025690485s to extract the tarball
	I0910 18:59:57.462753   72122 ssh_runner.go:146] rm: /preloaded.tar.lz4
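
The preload step above copies the cached image tarball onto the guest and unpacks it with lz4-aware tar. A minimal local sketch of the same extraction command (run through os/exec rather than minikube's ssh_runner) is:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same flags as the log: keep extended attributes and let tar call
	// lz4 for decompression while unpacking the preload under /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preloaded images extracted")
}
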
	I0910 18:59:57.223959   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:59.224657   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:01.224783   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:58.035610   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:00.535779   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:57.485417   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:57.485870   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:57.485896   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:57.485815   73202 retry.go:31] will retry after 1.638565814s: waiting for machine to come up
	I0910 18:59:59.126134   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:59.126625   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:59.126657   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:59.126576   73202 retry.go:31] will retry after 2.127929201s: waiting for machine to come up
	I0910 19:00:01.256121   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:01.256665   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 19:00:01.256694   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 19:00:01.256612   73202 retry.go:31] will retry after 2.530100505s: waiting for machine to come up
	I0910 18:59:57.508817   72122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:57.551327   72122 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0910 18:59:57.551350   72122 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 18:59:57.551434   72122 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:57.551704   72122 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.551776   72122 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.552000   72122 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.551807   72122 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.551846   72122 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.551714   72122 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0910 18:59:57.551917   72122 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.553642   72122 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.553660   72122 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.553917   72122 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:57.553935   72122 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0910 18:59:57.554014   72122 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.554160   72122 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.554376   72122 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.554662   72122 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.726191   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.742799   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.745264   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.753214   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.768122   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.770828   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0910 18:59:57.774835   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.807657   72122 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0910 18:59:57.807693   72122 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.807733   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.908662   72122 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0910 18:59:57.908678   72122 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0910 18:59:57.908707   72122 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.908711   72122 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.908759   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.908760   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.920214   72122 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0910 18:59:57.920248   72122 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0910 18:59:57.920258   72122 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.920280   72122 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.920304   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.920313   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.937914   72122 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0910 18:59:57.937952   72122 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.937958   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.937999   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.938033   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.938006   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.938073   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.938063   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.938157   72122 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0910 18:59:57.938185   72122 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0910 18:59:57.938215   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:58.044082   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:58.044139   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:58.044146   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:58.044173   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.045813   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:58.045816   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:58.045849   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.198804   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:58.198841   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:58.198881   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:58.198944   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.198978   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:58.199000   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:58.199081   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.353153   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0910 18:59:58.353217   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0910 18:59:58.353232   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0910 18:59:58.353277   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0910 18:59:58.359353   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.359363   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0910 18:59:58.359421   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.386872   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:58.407734   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0910 18:59:58.425479   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0910 18:59:58.553340   72122 cache_images.go:92] duration metric: took 1.001972084s to LoadCachedImages
	W0910 18:59:58.553438   72122 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0910 18:59:58.553455   72122 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.20.0 crio true true} ...
	I0910 18:59:58.553634   72122 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-432422 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
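
A note on the generated kubelet unit above: in systemd, a drop-in must first reset ExecStart with an empty assignment before it can set a new command, which is why the fragment contains two ExecStart lines; the log later installs it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A hedged Go sketch of writing such a drop-in, with the kubelet flags abbreviated for illustration:

package main

import (
	"fmt"
	"os"
)

func main() {
	// The empty ExecStart= clears any inherited command; the second line
	// then sets the real kubelet invocation (abbreviated here).
	dropIn := "[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\n" +
		"ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet" +
		" --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.51\n\n[Install]\n"
	path := "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
	if err := os.WriteFile(path, []byte(dropIn), 0644); err != nil {
		fmt.Println("write failed:", err)
	}
}
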
	I0910 18:59:58.553722   72122 ssh_runner.go:195] Run: crio config
	I0910 18:59:58.605518   72122 cni.go:84] Creating CNI manager for ""
	I0910 18:59:58.605542   72122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:58.605554   72122 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:58.605577   72122 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-432422 NodeName:old-k8s-version-432422 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0910 18:59:58.605744   72122 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-432422"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:59:58.605814   72122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0910 18:59:58.618033   72122 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:58.618096   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:58.629175   72122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0910 18:59:58.653830   72122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:58.679797   72122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0910 18:59:58.698692   72122 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:58.702565   72122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:58.715128   72122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:58.858262   72122 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:58.876681   72122 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422 for IP: 192.168.61.51
	I0910 18:59:58.876719   72122 certs.go:194] generating shared ca certs ...
	I0910 18:59:58.876740   72122 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:58.876921   72122 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:58.876983   72122 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:58.876996   72122 certs.go:256] generating profile certs ...
	I0910 18:59:58.877129   72122 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/client.key
	I0910 18:59:58.877210   72122 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key.da6b542b
	I0910 18:59:58.877264   72122 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.key
	I0910 18:59:58.877424   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:58.877473   72122 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:58.877491   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:58.877528   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:58.877560   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:58.877591   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:58.877648   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:58.878410   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:58.936013   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:58.969736   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:59.017414   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:59.063599   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0910 18:59:59.093934   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:59:59.138026   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:59.166507   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 18:59:59.196972   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:59.223596   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:59.250627   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:59.279886   72122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:59.300491   72122 ssh_runner.go:195] Run: openssl version
	I0910 18:59:59.306521   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:59.317238   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.321625   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.321682   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.327532   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:59.339028   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:59.350578   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.355025   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.355106   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.360701   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:59.375040   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:59.389867   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.395829   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.395890   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.402425   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:59:59.414077   72122 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:59.418909   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:59.425061   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:59.431213   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:59.437581   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:59.443603   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:59.449820   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
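
The openssl runs above use -checkend 86400 to confirm each control-plane certificate remains valid for at least the next 24 hours; a non-zero exit means the certificate expires inside that window. A minimal sketch of the same check done locally, with the certificate paths taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

// validFor reports whether the certificate at path is still valid for at
// least the given number of seconds, exactly what -checkend tests.
func validFor(path string, seconds int) bool {
	return exec.Command("openssl", "x509", "-noout",
		"-in", path, "-checkend", fmt.Sprint(seconds)).Run() == nil
}

func main() {
	for _, crt := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Printf("%s still valid in 24h: %v\n", crt, validFor(crt, 86400))
	}
}
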
	I0910 18:59:59.456100   72122 kubeadm.go:392] StartCluster: {Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:59.456189   72122 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:59.456234   72122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:59.497167   72122 cri.go:89] found id: ""
	I0910 18:59:59.497227   72122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:59.508449   72122 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:59.508474   72122 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:59.508527   72122 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:59.521416   72122 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:59.522489   72122 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-432422" does not appear in /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:59:59.523125   72122 kubeconfig.go:62] /home/jenkins/minikube-integration/19598-5973/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-432422" cluster setting kubeconfig missing "old-k8s-version-432422" context setting]
	I0910 18:59:59.524107   72122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:59.637793   72122 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:59.651879   72122 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.51
	I0910 18:59:59.651916   72122 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:59.651930   72122 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:59.651989   72122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:59.691857   72122 cri.go:89] found id: ""
	I0910 18:59:59.691922   72122 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:59.708610   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:59.718680   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:59.718702   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:59.718755   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:59:59.729965   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:59.730028   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:59.740037   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:59:59.750640   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:59.750706   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:59.762436   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:59:59.773456   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:59.773522   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:59.783438   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:59:59.792996   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:59.793056   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:59:59.805000   72122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:59:59.815384   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:59.955068   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:00.842403   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.102530   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.212897   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.340128   72122 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:00:01.340217   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:01.841004   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:02.340913   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
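
After the kubeadm init phases, the log polls pgrep about every half second until a kube-apiserver process appears. A hedged sketch of that style of wait loop (the pgrep pattern comes from the log; the timeout value here is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the check in the log: pgrep for a
// kube-apiserver process whose command line mentions minikube.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("apiserver process appeared")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence above
	}
	fmt.Println("timed out waiting for the apiserver process")
}
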
	I0910 19:00:03.225898   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:05.723882   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:03.034295   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:05.034431   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:03.790275   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:03.790710   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 19:00:03.790736   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 19:00:03.790662   73202 retry.go:31] will retry after 3.202952028s: waiting for machine to come up
	I0910 19:00:06.995302   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:06.996124   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 19:00:06.996149   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 19:00:06.996073   73202 retry.go:31] will retry after 3.076425277s: waiting for machine to come up
	I0910 19:00:02.840935   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:03.340938   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:03.840669   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:04.341213   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:04.841274   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:05.340698   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:05.841152   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:06.340425   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:06.841001   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:07.341198   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:07.724121   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:10.223744   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:07.533428   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:09.534830   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:12.033655   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:10.075125   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.075606   71183 main.go:141] libmachine: (embed-certs-836868) Found IP for machine: 192.168.39.107
	I0910 19:00:10.075634   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has current primary IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.075643   71183 main.go:141] libmachine: (embed-certs-836868) Reserving static IP address...
	I0910 19:00:10.076046   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "embed-certs-836868", mac: "52:54:00:24:ef:f2", ip: "192.168.39.107"} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.076075   71183 main.go:141] libmachine: (embed-certs-836868) DBG | skip adding static IP to network mk-embed-certs-836868 - found existing host DHCP lease matching {name: "embed-certs-836868", mac: "52:54:00:24:ef:f2", ip: "192.168.39.107"}
	I0910 19:00:10.076103   71183 main.go:141] libmachine: (embed-certs-836868) Reserved static IP address: 192.168.39.107
	I0910 19:00:10.076122   71183 main.go:141] libmachine: (embed-certs-836868) Waiting for SSH to be available...
	I0910 19:00:10.076133   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Getting to WaitForSSH function...
	I0910 19:00:10.078039   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.078327   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.078352   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.078452   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Using SSH client type: external
	I0910 19:00:10.078475   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa (-rw-------)
	I0910 19:00:10.078514   71183 main.go:141] libmachine: (embed-certs-836868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 19:00:10.078527   71183 main.go:141] libmachine: (embed-certs-836868) DBG | About to run SSH command:
	I0910 19:00:10.078548   71183 main.go:141] libmachine: (embed-certs-836868) DBG | exit 0
	I0910 19:00:10.201403   71183 main.go:141] libmachine: (embed-certs-836868) DBG | SSH cmd err, output: <nil>: 
	I0910 19:00:10.201748   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetConfigRaw
	I0910 19:00:10.202405   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:10.204760   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.205130   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.205160   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.205408   71183 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/config.json ...
	I0910 19:00:10.205697   71183 machine.go:93] provisionDockerMachine start ...
	I0910 19:00:10.205714   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:10.205924   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.208095   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.208394   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.208418   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.208534   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.208712   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.208856   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.208958   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.209193   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.209412   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.209427   71183 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 19:00:10.313247   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 19:00:10.313278   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 19:00:10.313556   71183 buildroot.go:166] provisioning hostname "embed-certs-836868"
	I0910 19:00:10.313584   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 19:00:10.313765   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.316135   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.316569   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.316592   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.316739   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.316893   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.317046   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.317165   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.317288   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.317490   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.317506   71183 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-836868 && echo "embed-certs-836868" | sudo tee /etc/hostname
	I0910 19:00:10.433585   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-836868
	
	I0910 19:00:10.433608   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.436076   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.436407   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.436440   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.436627   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.436826   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.436972   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.437146   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.437314   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.437480   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.437495   71183 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-836868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-836868/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-836868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 19:00:10.546105   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 19:00:10.546146   71183 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 19:00:10.546186   71183 buildroot.go:174] setting up certificates
	I0910 19:00:10.546197   71183 provision.go:84] configureAuth start
	I0910 19:00:10.546214   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 19:00:10.546485   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:10.549236   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.549567   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.549594   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.549696   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.551807   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.552162   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.552195   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.552326   71183 provision.go:143] copyHostCerts
	I0910 19:00:10.552370   71183 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 19:00:10.552380   71183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 19:00:10.552435   71183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 19:00:10.552559   71183 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 19:00:10.552568   71183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 19:00:10.552588   71183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 19:00:10.552646   71183 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 19:00:10.552653   71183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 19:00:10.552669   71183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 19:00:10.552714   71183 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.embed-certs-836868 san=[127.0.0.1 192.168.39.107 embed-certs-836868 localhost minikube]
	I0910 19:00:10.610073   71183 provision.go:177] copyRemoteCerts
	I0910 19:00:10.610132   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 19:00:10.610153   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.612881   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.613264   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.613301   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.613485   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.613695   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.613863   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.613980   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:10.695479   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 19:00:10.719380   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0910 19:00:10.744099   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 19:00:10.767849   71183 provision.go:87] duration metric: took 221.638443ms to configureAuth
	I0910 19:00:10.767873   71183 buildroot.go:189] setting minikube options for container-runtime
	I0910 19:00:10.768065   71183 config.go:182] Loaded profile config "embed-certs-836868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:00:10.768150   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.770831   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.771149   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.771178   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.771338   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.771539   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.771702   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.771825   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.771952   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.772106   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.772120   71183 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 19:00:10.992528   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 19:00:10.992568   71183 machine.go:96] duration metric: took 786.857321ms to provisionDockerMachine
	I0910 19:00:10.992583   71183 start.go:293] postStartSetup for "embed-certs-836868" (driver="kvm2")
	I0910 19:00:10.992598   71183 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 19:00:10.992630   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:10.992999   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 19:00:10.993030   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.995361   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.995745   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.995777   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.995925   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.996100   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.996212   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.996375   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:11.079205   71183 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 19:00:11.083998   71183 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 19:00:11.084028   71183 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 19:00:11.084089   71183 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 19:00:11.084158   71183 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 19:00:11.084241   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 19:00:11.093150   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 19:00:11.116894   71183 start.go:296] duration metric: took 124.294668ms for postStartSetup
	I0910 19:00:11.116938   71183 fix.go:56] duration metric: took 19.934731446s for fixHost
	I0910 19:00:11.116962   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:11.119482   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.119784   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.119821   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.119980   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:11.120176   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.120331   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.120501   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:11.120645   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:11.120868   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:11.120883   71183 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 19:00:11.217542   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994811.172877822
	
	I0910 19:00:11.217570   71183 fix.go:216] guest clock: 1725994811.172877822
	I0910 19:00:11.217577   71183 fix.go:229] Guest: 2024-09-10 19:00:11.172877822 +0000 UTC Remote: 2024-09-10 19:00:11.116943488 +0000 UTC m=+358.948412200 (delta=55.934334ms)
	I0910 19:00:11.217603   71183 fix.go:200] guest clock delta is within tolerance: 55.934334ms
	I0910 19:00:11.217607   71183 start.go:83] releasing machines lock for "embed-certs-836868", held for 20.035440196s
	I0910 19:00:11.217627   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.217861   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:11.220855   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.221282   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.221313   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.221533   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.222074   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.222277   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.222354   71183 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 19:00:11.222402   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:11.222528   71183 ssh_runner.go:195] Run: cat /version.json
	I0910 19:00:11.222570   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:11.225205   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.225542   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.225565   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.225581   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.225753   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:11.225934   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.226035   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.226062   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.226109   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:11.226207   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:11.226283   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:11.226370   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.226535   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:11.226668   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:11.297642   71183 ssh_runner.go:195] Run: systemctl --version
	I0910 19:00:11.322486   71183 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 19:00:11.470402   71183 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 19:00:11.477843   71183 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 19:00:11.477903   71183 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 19:00:11.495518   71183 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 19:00:11.495542   71183 start.go:495] detecting cgroup driver to use...
	I0910 19:00:11.495597   71183 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 19:00:11.512467   71183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:00:11.526665   71183 docker.go:217] disabling cri-docker service (if available) ...
	I0910 19:00:11.526732   71183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 19:00:11.540445   71183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 19:00:11.554386   71183 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 19:00:11.682012   71183 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 19:00:11.846239   71183 docker.go:233] disabling docker service ...
	I0910 19:00:11.846303   71183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 19:00:11.860981   71183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 19:00:11.874271   71183 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 19:00:12.005716   71183 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 19:00:12.137151   71183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 19:00:12.151156   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:00:12.170086   71183 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 19:00:12.170150   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.180741   71183 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 19:00:12.180804   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.190933   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.200885   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:07.840772   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:08.341153   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:08.840737   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:09.340471   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:09.840262   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:10.340827   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:10.840645   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:11.340524   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:11.840521   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:12.340560   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:12.210950   71183 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 19:00:12.221730   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.232931   71183 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.251318   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.261473   71183 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 19:00:12.270818   71183 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 19:00:12.270873   71183 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 19:00:12.284581   71183 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 19:00:12.294214   71183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:00:12.424646   71183 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 19:00:12.517553   71183 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 19:00:12.517633   71183 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 19:00:12.522728   71183 start.go:563] Will wait 60s for crictl version
	I0910 19:00:12.522775   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:00:12.526754   71183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 19:00:12.569377   71183 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 19:00:12.569454   71183 ssh_runner.go:195] Run: crio --version
	I0910 19:00:12.597783   71183 ssh_runner.go:195] Run: crio --version
	I0910 19:00:12.632619   71183 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 19:00:12.725298   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:15.223906   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:14.035868   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:16.534058   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:12.633800   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:12.637104   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:12.637447   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:12.637476   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:12.637684   71183 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 19:00:12.641996   71183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:00:12.654577   71183 kubeadm.go:883] updating cluster {Name:embed-certs-836868 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-836868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 19:00:12.654684   71183 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 19:00:12.654737   71183 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 19:00:12.694585   71183 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 19:00:12.694644   71183 ssh_runner.go:195] Run: which lz4
	I0910 19:00:12.699764   71183 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 19:00:12.705406   71183 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 19:00:12.705437   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 19:00:14.054131   71183 crio.go:462] duration metric: took 1.354391682s to copy over tarball
	I0910 19:00:14.054206   71183 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 19:00:16.114941   71183 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.06070257s)
	I0910 19:00:16.114968   71183 crio.go:469] duration metric: took 2.060808083s to extract the tarball
	I0910 19:00:16.114978   71183 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 19:00:16.153934   71183 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 19:00:16.199988   71183 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 19:00:16.200008   71183 cache_images.go:84] Images are preloaded, skipping loading
	I0910 19:00:16.200015   71183 kubeadm.go:934] updating node { 192.168.39.107 8443 v1.31.0 crio true true} ...
	I0910 19:00:16.200109   71183 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-836868 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-836868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 19:00:16.200168   71183 ssh_runner.go:195] Run: crio config
	I0910 19:00:16.249409   71183 cni.go:84] Creating CNI manager for ""
	I0910 19:00:16.249430   71183 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:00:16.249443   71183 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 19:00:16.249462   71183 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-836868 NodeName:embed-certs-836868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 19:00:16.249596   71183 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-836868"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 19:00:16.249652   71183 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 19:00:16.265984   71183 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 19:00:16.266062   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 19:00:16.276007   71183 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0910 19:00:16.291971   71183 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 19:00:16.307712   71183 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0910 19:00:16.323789   71183 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0910 19:00:16.327478   71183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:00:16.339545   71183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:00:16.470249   71183 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:00:16.487798   71183 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868 for IP: 192.168.39.107
	I0910 19:00:16.487838   71183 certs.go:194] generating shared ca certs ...
	I0910 19:00:16.487858   71183 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:00:16.488058   71183 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 19:00:16.488110   71183 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 19:00:16.488124   71183 certs.go:256] generating profile certs ...
	I0910 19:00:16.488243   71183 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/client.key
	I0910 19:00:16.488307   71183 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/apiserver.key.04acd22a
	I0910 19:00:16.488355   71183 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/proxy-client.key
	I0910 19:00:16.488507   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 19:00:16.488547   71183 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 19:00:16.488560   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 19:00:16.488593   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 19:00:16.488633   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 19:00:16.488669   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 19:00:16.488856   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 19:00:16.489528   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 19:00:16.529980   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 19:00:16.568653   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 19:00:16.593924   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 19:00:16.628058   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0910 19:00:16.669209   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 19:00:16.693274   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 19:00:16.716323   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 19:00:16.740155   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 19:00:16.763908   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 19:00:16.787980   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 19:00:16.811754   71183 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 19:00:16.828151   71183 ssh_runner.go:195] Run: openssl version
	I0910 19:00:16.834095   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 19:00:16.845376   71183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:00:16.850178   71183 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:00:16.850230   71183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:00:16.856507   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 19:00:16.868105   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 19:00:16.879950   71183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 19:00:16.884778   71183 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 19:00:16.884823   71183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 19:00:16.890715   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 19:00:16.903523   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 19:00:16.914585   71183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 19:00:16.919105   71183 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 19:00:16.919151   71183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 19:00:16.924965   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 19:00:16.935579   71183 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 19:00:16.939895   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 19:00:16.945595   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 19:00:16.951247   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 19:00:16.956938   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 19:00:16.962908   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 19:00:16.968664   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 19:00:16.974624   71183 kubeadm.go:392] StartCluster: {Name:embed-certs-836868 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-836868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:00:16.974725   71183 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 19:00:16.974778   71183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 19:00:17.012869   71183 cri.go:89] found id: ""
	I0910 19:00:17.012947   71183 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 19:00:17.023781   71183 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 19:00:17.023798   71183 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 19:00:17.023846   71183 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 19:00:17.034549   71183 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 19:00:17.035566   71183 kubeconfig.go:125] found "embed-certs-836868" server: "https://192.168.39.107:8443"
	I0910 19:00:17.037751   71183 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 19:00:17.047667   71183 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.107
	I0910 19:00:17.047696   71183 kubeadm.go:1160] stopping kube-system containers ...
	I0910 19:00:17.047708   71183 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 19:00:17.047747   71183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 19:00:17.083130   71183 cri.go:89] found id: ""
	I0910 19:00:17.083200   71183 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 19:00:17.101035   71183 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:00:17.111335   71183 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:00:17.111357   71183 kubeadm.go:157] found existing configuration files:
	
	I0910 19:00:17.111414   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:00:17.120543   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:00:17.120593   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:00:17.130938   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:00:17.140688   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:00:17.140747   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:00:17.150637   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:00:17.160483   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:00:17.160520   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:00:17.170417   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:00:17.179778   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:00:17.179827   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:00:17.189197   71183 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:00:17.199264   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:12.841060   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:13.340347   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:13.841136   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:14.340603   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:14.840913   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:15.341205   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:15.840692   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:16.340839   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:16.841050   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:17.341340   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:17.224985   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:19.231248   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:18.534658   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:20.534807   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:17.309791   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.257162   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.482216   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.555094   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.645089   71183 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:00:18.645178   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.146266   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.645546   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.146275   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.645291   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.662158   71183 api_server.go:72] duration metric: took 2.017082575s to wait for apiserver process to appear ...
	I0910 19:00:20.662183   71183 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:00:20.662204   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:17.840510   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:18.340821   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:18.841156   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.340316   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.840339   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.341140   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.841333   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:21.340342   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:21.840282   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:22.340361   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.326005   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 19:00:23.326036   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 19:00:23.326048   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:23.346004   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 19:00:23.346035   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 19:00:23.662353   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:23.669314   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:00:23.669344   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:00:24.162975   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:24.170262   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:00:24.170298   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:00:24.662865   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:24.667320   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I0910 19:00:24.674393   71183 api_server.go:141] control plane version: v1.31.0
	I0910 19:00:24.674418   71183 api_server.go:131] duration metric: took 4.01222766s to wait for apiserver health ...
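For context, the healthz wait above repeatedly probes https://192.168.39.107:8443/healthz, treats the intermediate 403 and 500 responses as "not ready yet", and stops on the first 200. A minimal Go sketch of that style of polling loop follows; the URL, 500ms interval and overall timeout are taken from this log, and the TLS handling is an assumption for illustration rather than minikube's actual api_server.go code.

    // Sketch only: poll an apiserver /healthz endpoint until it returns 200.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            // Assumption for the sketch: skip cert verification instead of
            // loading the cluster CA the way minikube does.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned 200: ok
                }
                // 403 (anonymous forbidden) or 500 (bootstrap hooks pending): retry.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.107:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
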
	I0910 19:00:24.674427   71183 cni.go:84] Creating CNI manager for ""
	I0910 19:00:24.674433   71183 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:00:24.676229   71183 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 19:00:24.677519   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 19:00:24.692951   71183 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
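The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is not reproduced in the log. As a rough illustration of the bridge CNI configuration being installed, a sketch that writes a generic bridge + portmap conflist could look like the following; the field values are assumptions, not minikube's exact template.

    // Sketch only: write a generic bridge CNI conflist of the same shape as
    // the file minikube installs above. Values are illustrative assumptions.
    package main

    import (
        "fmt"
        "os"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            fmt.Println("write conflist:", err)
        }
    }
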
	I0910 19:00:24.718355   71183 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:00:24.732731   71183 system_pods.go:59] 8 kube-system pods found
	I0910 19:00:24.732758   71183 system_pods.go:61] "coredns-6f6b679f8f-mt78p" [b4bbfe99-3c36-4095-b7e8-ee0861f9973f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 19:00:24.732764   71183 system_pods.go:61] "etcd-embed-certs-836868" [0515a8db-e1b9-41d9-a69e-e49fbd6af70f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 19:00:24.732775   71183 system_pods.go:61] "kube-apiserver-embed-certs-836868" [f6771940-518b-4ae6-93a6-8ffd2e08a6ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 19:00:24.732781   71183 system_pods.go:61] "kube-controller-manager-embed-certs-836868" [07f6b11e-478b-4501-b615-8906877c7cbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 19:00:24.732798   71183 system_pods.go:61] "kube-proxy-4fddv" [13f0b1df-26eb-4a6c-957d-0b7655309cb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0910 19:00:24.732808   71183 system_pods.go:61] "kube-scheduler-embed-certs-836868" [a16bf2b1-3fe3-487b-92f7-f71f693b83dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 19:00:24.732817   71183 system_pods.go:61] "metrics-server-6867b74b74-26knw" [fdf89bfa-f2b6-4dc4-9279-ed75c1256494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:00:24.732823   71183 system_pods.go:61] "storage-provisioner" [47ed78c5-1cce-4d50-a023-5c356f331035] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 19:00:24.732835   71183 system_pods.go:74] duration metric: took 14.459216ms to wait for pod list to return data ...
	I0910 19:00:24.732846   71183 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:00:24.742472   71183 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:00:24.742497   71183 node_conditions.go:123] node cpu capacity is 2
	I0910 19:00:24.742507   71183 node_conditions.go:105] duration metric: took 9.657853ms to run NodePressure ...
	I0910 19:00:24.742523   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:25.021719   71183 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 19:00:25.026163   71183 kubeadm.go:739] kubelet initialised
	I0910 19:00:25.026187   71183 kubeadm.go:740] duration metric: took 4.442058ms waiting for restarted kubelet to initialise ...
	I0910 19:00:25.026196   71183 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:00:25.030895   71183 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.035021   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.035044   71183 pod_ready.go:82] duration metric: took 4.12756ms for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.035055   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.035064   71183 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.039362   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "etcd-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.039381   71183 pod_ready.go:82] duration metric: took 4.309293ms for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.039389   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "etcd-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.039394   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.049142   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.049164   71183 pod_ready.go:82] duration metric: took 9.762471ms for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.049175   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.049182   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.122255   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.122285   71183 pod_ready.go:82] duration metric: took 73.09407ms for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.122295   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.122301   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.522122   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-proxy-4fddv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.522160   71183 pod_ready.go:82] duration metric: took 399.850787ms for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.522175   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-proxy-4fddv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.522185   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.921918   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.921947   71183 pod_ready.go:82] duration metric: took 399.75274ms for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.921956   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.921962   71183 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:26.322195   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:26.322219   71183 pod_ready.go:82] duration metric: took 400.248825ms for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:26.322228   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:26.322235   71183 pod_ready.go:39] duration metric: took 1.296028669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:00:26.322251   71183 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 19:00:26.333796   71183 ops.go:34] apiserver oom_adj: -16
	I0910 19:00:26.333824   71183 kubeadm.go:597] duration metric: took 9.310018521s to restartPrimaryControlPlane
	I0910 19:00:26.333834   71183 kubeadm.go:394] duration metric: took 9.359219145s to StartCluster
	I0910 19:00:26.333850   71183 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:00:26.333920   71183 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 19:00:26.336496   71183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:00:26.336792   71183 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 19:00:26.336863   71183 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 19:00:26.336935   71183 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-836868"
	I0910 19:00:26.336969   71183 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-836868"
	W0910 19:00:26.336980   71183 addons.go:243] addon storage-provisioner should already be in state true
	I0910 19:00:26.336995   71183 addons.go:69] Setting default-storageclass=true in profile "embed-certs-836868"
	I0910 19:00:26.337050   71183 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-836868"
	I0910 19:00:26.337058   71183 config.go:182] Loaded profile config "embed-certs-836868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:00:26.337050   71183 addons.go:69] Setting metrics-server=true in profile "embed-certs-836868"
	I0910 19:00:26.337011   71183 host.go:66] Checking if "embed-certs-836868" exists ...
	I0910 19:00:26.337146   71183 addons.go:234] Setting addon metrics-server=true in "embed-certs-836868"
	W0910 19:00:26.337165   71183 addons.go:243] addon metrics-server should already be in state true
	I0910 19:00:26.337234   71183 host.go:66] Checking if "embed-certs-836868" exists ...
	I0910 19:00:26.337501   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.337547   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.337552   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.337583   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.337638   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.337677   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.339741   71183 out.go:177] * Verifying Kubernetes components...
	I0910 19:00:26.341792   71183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:00:26.354154   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37431
	I0910 19:00:26.354750   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.355345   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.355379   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.355756   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.356316   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
	I0910 19:00:26.356389   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.356428   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.356508   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35641
	I0910 19:00:26.356810   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.356893   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.357384   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.357411   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.361164   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.361278   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.361302   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.361363   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.361709   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.362446   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.362483   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.364762   71183 addons.go:234] Setting addon default-storageclass=true in "embed-certs-836868"
	W0910 19:00:26.364786   71183 addons.go:243] addon default-storageclass should already be in state true
	I0910 19:00:26.364814   71183 host.go:66] Checking if "embed-certs-836868" exists ...
	I0910 19:00:26.365165   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.365230   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.379158   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35583
	I0910 19:00:26.379696   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.380235   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.380266   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.380654   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.380865   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.382030   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0910 19:00:26.382358   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.382892   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.382912   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.382928   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:26.383271   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.383441   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.385129   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:26.385171   71183 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 19:00:26.385687   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40109
	I0910 19:00:26.386001   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.386217   71183 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:00:21.723833   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:23.724422   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:25.724456   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:23.034262   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:25.035125   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:26.386227   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 19:00:26.386289   71183 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 19:00:26.386309   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:26.386518   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.386533   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.386931   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.387566   71183 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:00:26.387651   71183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 19:00:26.387672   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:26.387618   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.387760   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.389782   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.389941   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:26.390190   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.390263   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:26.390558   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:26.390744   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:26.390921   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:26.391058   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.391542   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:26.391585   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.391788   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:26.391941   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:26.392097   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:26.392256   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:26.404601   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42887
	I0910 19:00:26.405167   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.406097   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.406655   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.407006   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.407163   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.409223   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:26.409437   71183 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 19:00:26.409454   71183 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 19:00:26.409470   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:26.412388   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.412812   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:26.412831   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.413010   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:26.413177   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:26.413333   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:26.413474   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:26.533906   71183 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:00:26.552203   71183 node_ready.go:35] waiting up to 6m0s for node "embed-certs-836868" to be "Ready" ...
	I0910 19:00:26.687774   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 19:00:26.687804   71183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 19:00:26.690124   71183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:00:26.737647   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 19:00:26.737673   71183 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 19:00:26.739650   71183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 19:00:26.783096   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:00:26.783125   71183 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 19:00:26.828766   71183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:00:22.841048   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.341180   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.841325   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:24.340485   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:24.841340   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:25.340935   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:25.840886   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:26.340826   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:26.840344   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:27.341189   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:27.844896   71183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.154733205s)
	I0910 19:00:27.844931   71183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.105250764s)
	I0910 19:00:27.844944   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.844969   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.844979   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.844980   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.845406   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.845420   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.845434   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.845446   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.845464   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.845471   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.845702   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.845733   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.845747   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.847084   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.847101   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.847110   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.847118   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.847308   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.847323   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.852938   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.852956   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.853198   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.853219   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.853224   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.879527   71183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.05071539s)
	I0910 19:00:27.879577   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.879597   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.880030   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.880050   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.880059   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.880081   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.880381   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.880405   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.880416   71183 addons.go:475] Verifying addon metrics-server=true in "embed-certs-836868"
	I0910 19:00:27.880383   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.883034   71183 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0910 19:00:28.222881   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:30.223636   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:27.535036   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:30.034633   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:27.884243   71183 addons.go:510] duration metric: took 1.547392632s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0910 19:00:28.556786   71183 node_ready.go:53] node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:31.055519   71183 node_ready.go:53] node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:27.840306   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:28.340657   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:28.841179   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:29.340881   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:29.840957   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:30.341260   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:30.841151   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:31.340930   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:31.840360   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:32.341199   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:32.724435   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:35.223194   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:32.533611   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:34.534941   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:37.034007   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:33.056381   71183 node_ready.go:53] node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:34.056156   71183 node_ready.go:49] node "embed-certs-836868" has status "Ready":"True"
	I0910 19:00:34.056191   71183 node_ready.go:38] duration metric: took 7.503955102s for node "embed-certs-836868" to be "Ready" ...
	I0910 19:00:34.056200   71183 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:00:34.063331   71183 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:34.068294   71183 pod_ready.go:93] pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:34.068322   71183 pod_ready.go:82] duration metric: took 4.96275ms for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:34.068335   71183 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:36.077798   71183 pod_ready.go:103] pod "etcd-embed-certs-836868" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:32.841192   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:33.340518   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:33.840995   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:34.341016   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:34.840480   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:35.340647   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:35.840416   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:36.340921   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:36.840557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:37.340956   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:37.224065   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:39.723852   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:39.533725   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:41.534430   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:37.576189   71183 pod_ready.go:93] pod "etcd-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.576218   71183 pod_ready.go:82] duration metric: took 3.507872898s for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.576238   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.582150   71183 pod_ready.go:93] pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.582167   71183 pod_ready.go:82] duration metric: took 5.921544ms for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.582175   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.586941   71183 pod_ready.go:93] pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.586956   71183 pod_ready.go:82] duration metric: took 4.774648ms for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.586963   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.591829   71183 pod_ready.go:93] pod "kube-proxy-4fddv" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.591846   71183 pod_ready.go:82] duration metric: took 4.876938ms for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.591854   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.657930   71183 pod_ready.go:93] pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.657952   71183 pod_ready.go:82] duration metric: took 66.092785ms for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.657962   71183 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:39.665465   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:41.666629   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:37.841210   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:38.341302   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:38.840926   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:39.340558   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:39.840395   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:40.341022   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:40.841093   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:41.341228   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:41.841103   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:42.340329   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:42.223446   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:44.223533   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:46.224840   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:44.033565   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:46.034402   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:44.164336   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:46.164983   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:42.841000   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:43.341147   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:43.840534   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:44.340988   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:44.840926   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:45.340859   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:45.840877   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:46.340930   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:46.841175   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:47.341064   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.722930   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:50.723539   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:48.036816   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:50.534367   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:48.667433   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:51.164114   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:47.841037   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.341204   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.840961   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:49.340679   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:49.841173   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:50.340751   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:50.841158   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:51.340999   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:51.840349   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:52.340383   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:52.723945   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:55.224168   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:53.034234   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:55.533690   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:53.164294   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:55.666369   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:52.840991   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:53.340439   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:53.840487   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:54.340407   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:54.840557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:55.340603   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:55.840619   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:56.340844   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:56.841190   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:57.340927   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:57.724247   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:00.223715   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:58.033639   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:00.034297   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:57.670234   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:00.164278   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:02.164755   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:57.840798   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:58.340905   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:58.841330   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:59.340743   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:59.840256   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:00.340970   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:00.840732   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:01.340927   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:01.341014   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:01.378922   72122 cri.go:89] found id: ""
	I0910 19:01:01.378953   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.378964   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:01.378971   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:01.379032   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:01.413274   72122 cri.go:89] found id: ""
	I0910 19:01:01.413302   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.413313   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:01.413320   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:01.413383   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:01.449165   72122 cri.go:89] found id: ""
	I0910 19:01:01.449204   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.449215   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:01.449221   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:01.449291   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:01.484627   72122 cri.go:89] found id: ""
	I0910 19:01:01.484650   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.484657   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:01.484663   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:01.484720   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:01.519332   72122 cri.go:89] found id: ""
	I0910 19:01:01.519357   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.519364   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:01.519370   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:01.519424   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:01.554080   72122 cri.go:89] found id: ""
	I0910 19:01:01.554102   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.554109   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:01.554114   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:01.554160   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:01.590100   72122 cri.go:89] found id: ""
	I0910 19:01:01.590131   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.590143   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:01.590149   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:01.590208   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:01.623007   72122 cri.go:89] found id: ""
	I0910 19:01:01.623034   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.623045   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:01.623055   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:01.623070   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:01.679940   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:01.679971   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:01.694183   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:01.694218   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:01.826997   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:01.827025   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:01.827038   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:01.903885   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:01.903926   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:02.224039   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:04.224422   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:02.533395   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:05.034075   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:04.665680   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:06.665874   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:04.450792   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:04.471427   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:04.471501   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:04.521450   72122 cri.go:89] found id: ""
	I0910 19:01:04.521484   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.521494   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:04.521503   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:04.521562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:04.577588   72122 cri.go:89] found id: ""
	I0910 19:01:04.577622   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.577633   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:04.577641   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:04.577707   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:04.615558   72122 cri.go:89] found id: ""
	I0910 19:01:04.615586   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.615594   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:04.615599   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:04.615652   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:04.655763   72122 cri.go:89] found id: ""
	I0910 19:01:04.655793   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.655806   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:04.655815   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:04.655881   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:04.692620   72122 cri.go:89] found id: ""
	I0910 19:01:04.692642   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.692649   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:04.692654   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:04.692709   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:04.730575   72122 cri.go:89] found id: ""
	I0910 19:01:04.730601   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.730611   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:04.730616   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:04.730665   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:04.766716   72122 cri.go:89] found id: ""
	I0910 19:01:04.766742   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.766749   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:04.766754   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:04.766799   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:04.808122   72122 cri.go:89] found id: ""
	I0910 19:01:04.808151   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.808162   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:04.808173   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:04.808185   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:04.858563   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:04.858592   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:04.872323   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:04.872350   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:04.942541   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:04.942571   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:04.942588   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:05.022303   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:05.022338   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:06.723760   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:08.724550   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:11.223094   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:07.533060   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:09.534466   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:12.034244   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:09.163526   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:11.164502   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:07.562092   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:07.575254   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:07.575308   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:07.616583   72122 cri.go:89] found id: ""
	I0910 19:01:07.616607   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.616615   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:07.616620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:07.616676   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:07.654676   72122 cri.go:89] found id: ""
	I0910 19:01:07.654700   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.654711   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:07.654718   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:07.654790   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:07.690054   72122 cri.go:89] found id: ""
	I0910 19:01:07.690085   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.690096   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:07.690104   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:07.690171   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:07.724273   72122 cri.go:89] found id: ""
	I0910 19:01:07.724295   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.724302   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:07.724307   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:07.724363   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:07.757621   72122 cri.go:89] found id: ""
	I0910 19:01:07.757646   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.757654   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:07.757660   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:07.757716   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:07.791502   72122 cri.go:89] found id: ""
	I0910 19:01:07.791533   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.791543   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:07.791557   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:07.791620   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:07.825542   72122 cri.go:89] found id: ""
	I0910 19:01:07.825577   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.825586   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:07.825592   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:07.825649   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:07.862278   72122 cri.go:89] found id: ""
	I0910 19:01:07.862303   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.862312   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:07.862320   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:07.862331   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:07.952016   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:07.952059   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:07.997004   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:07.997034   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:08.047745   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:08.047783   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:08.064712   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:08.064736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:08.136822   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:10.637017   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:10.650113   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:10.650198   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:10.687477   72122 cri.go:89] found id: ""
	I0910 19:01:10.687504   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.687513   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:10.687520   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:10.687594   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:10.721410   72122 cri.go:89] found id: ""
	I0910 19:01:10.721437   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.721447   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:10.721455   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:10.721514   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:10.757303   72122 cri.go:89] found id: ""
	I0910 19:01:10.757330   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.757338   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:10.757343   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:10.757396   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:10.794761   72122 cri.go:89] found id: ""
	I0910 19:01:10.794788   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.794799   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:10.794806   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:10.794885   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:10.828631   72122 cri.go:89] found id: ""
	I0910 19:01:10.828657   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.828668   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:10.828675   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:10.828737   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:10.863609   72122 cri.go:89] found id: ""
	I0910 19:01:10.863634   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.863641   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:10.863646   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:10.863734   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:10.899299   72122 cri.go:89] found id: ""
	I0910 19:01:10.899324   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.899335   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:10.899342   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:10.899403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:10.939233   72122 cri.go:89] found id: ""
	I0910 19:01:10.939259   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.939268   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:10.939277   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:10.939290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:10.976599   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:10.976627   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:11.029099   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:11.029144   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:11.045401   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:11.045426   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:11.119658   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:11.119679   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:11.119696   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:13.224189   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:15.723673   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:14.034325   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:16.534463   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:13.663847   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:15.664387   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:13.698696   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:13.712317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:13.712386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:13.747442   72122 cri.go:89] found id: ""
	I0910 19:01:13.747470   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.747480   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:13.747487   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:13.747555   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:13.782984   72122 cri.go:89] found id: ""
	I0910 19:01:13.783008   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.783015   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:13.783021   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:13.783078   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:13.820221   72122 cri.go:89] found id: ""
	I0910 19:01:13.820245   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.820256   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:13.820262   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:13.820322   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:13.854021   72122 cri.go:89] found id: ""
	I0910 19:01:13.854056   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.854068   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:13.854075   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:13.854138   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:13.888292   72122 cri.go:89] found id: ""
	I0910 19:01:13.888321   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.888331   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:13.888338   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:13.888398   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:13.922301   72122 cri.go:89] found id: ""
	I0910 19:01:13.922330   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.922341   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:13.922349   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:13.922408   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:13.959977   72122 cri.go:89] found id: ""
	I0910 19:01:13.960002   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.960010   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:13.960015   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:13.960074   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:13.995255   72122 cri.go:89] found id: ""
	I0910 19:01:13.995282   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.995293   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:13.995308   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:13.995323   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:14.050760   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:14.050790   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:14.064694   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:14.064723   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:14.137406   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:14.137431   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:14.137447   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:14.216624   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:14.216657   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:16.765643   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:16.778746   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:16.778821   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:16.814967   72122 cri.go:89] found id: ""
	I0910 19:01:16.814999   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.815010   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:16.815017   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:16.815073   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:16.850306   72122 cri.go:89] found id: ""
	I0910 19:01:16.850334   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.850345   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:16.850352   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:16.850413   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:16.886104   72122 cri.go:89] found id: ""
	I0910 19:01:16.886134   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.886144   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:16.886152   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:16.886218   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:16.921940   72122 cri.go:89] found id: ""
	I0910 19:01:16.921968   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.921977   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:16.921983   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:16.922032   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:16.956132   72122 cri.go:89] found id: ""
	I0910 19:01:16.956166   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.956177   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:16.956185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:16.956247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:16.988240   72122 cri.go:89] found id: ""
	I0910 19:01:16.988269   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.988278   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:16.988284   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:16.988330   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:17.022252   72122 cri.go:89] found id: ""
	I0910 19:01:17.022281   72122 logs.go:276] 0 containers: []
	W0910 19:01:17.022291   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:17.022297   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:17.022364   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:17.058664   72122 cri.go:89] found id: ""
	I0910 19:01:17.058693   72122 logs.go:276] 0 containers: []
	W0910 19:01:17.058703   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:17.058715   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:17.058740   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:17.136927   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:17.136964   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:17.189427   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:17.189457   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:17.242193   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:17.242225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:17.257878   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:17.257908   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:17.330096   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:17.724465   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:20.224230   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:18.534806   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:21.034368   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:17.667897   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:20.165174   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:22.165421   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:19.831030   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:19.844516   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:19.844581   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:19.879878   72122 cri.go:89] found id: ""
	I0910 19:01:19.879908   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.879919   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:19.879927   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:19.879988   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:19.915992   72122 cri.go:89] found id: ""
	I0910 19:01:19.916018   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.916025   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:19.916030   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:19.916084   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:19.949206   72122 cri.go:89] found id: ""
	I0910 19:01:19.949232   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.949242   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:19.949249   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:19.949311   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:19.983011   72122 cri.go:89] found id: ""
	I0910 19:01:19.983035   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.983043   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:19.983048   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:19.983096   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:20.018372   72122 cri.go:89] found id: ""
	I0910 19:01:20.018394   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.018402   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:20.018408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:20.018466   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:20.053941   72122 cri.go:89] found id: ""
	I0910 19:01:20.053967   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.053975   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:20.053980   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:20.054037   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:20.084999   72122 cri.go:89] found id: ""
	I0910 19:01:20.085026   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.085035   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:20.085042   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:20.085115   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:20.124036   72122 cri.go:89] found id: ""
	I0910 19:01:20.124063   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.124072   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:20.124086   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:20.124103   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:20.176917   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:20.176944   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:20.190831   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:20.190852   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:20.257921   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:20.257942   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:20.257954   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:20.335320   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:20.335350   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:22.723788   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:25.223765   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:23.034456   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:25.534821   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:24.663208   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:26.664282   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:22.875167   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:22.888803   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:22.888858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:22.922224   72122 cri.go:89] found id: ""
	I0910 19:01:22.922252   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.922264   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:22.922270   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:22.922328   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:22.959502   72122 cri.go:89] found id: ""
	I0910 19:01:22.959536   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.959546   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:22.959553   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:22.959619   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:22.992914   72122 cri.go:89] found id: ""
	I0910 19:01:22.992944   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.992955   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:22.992962   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:22.993022   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:23.028342   72122 cri.go:89] found id: ""
	I0910 19:01:23.028367   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.028376   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:23.028384   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:23.028443   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:23.064715   72122 cri.go:89] found id: ""
	I0910 19:01:23.064742   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.064753   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:23.064761   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:23.064819   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:23.100752   72122 cri.go:89] found id: ""
	I0910 19:01:23.100781   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.100789   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:23.100795   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:23.100857   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:23.136017   72122 cri.go:89] found id: ""
	I0910 19:01:23.136045   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.136055   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:23.136062   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:23.136108   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:23.170787   72122 cri.go:89] found id: ""
	I0910 19:01:23.170811   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.170819   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:23.170826   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:23.170840   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:23.210031   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:23.210059   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:23.261525   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:23.261557   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:23.275611   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:23.275636   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:23.348543   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:23.348568   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:23.348582   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:25.929406   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:25.942658   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:25.942737   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:25.977231   72122 cri.go:89] found id: ""
	I0910 19:01:25.977260   72122 logs.go:276] 0 containers: []
	W0910 19:01:25.977270   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:25.977277   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:25.977336   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:26.015060   72122 cri.go:89] found id: ""
	I0910 19:01:26.015093   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.015103   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:26.015110   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:26.015180   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:26.053618   72122 cri.go:89] found id: ""
	I0910 19:01:26.053643   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.053651   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:26.053656   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:26.053712   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:26.090489   72122 cri.go:89] found id: ""
	I0910 19:01:26.090515   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.090523   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:26.090529   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:26.090587   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:26.126687   72122 cri.go:89] found id: ""
	I0910 19:01:26.126710   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.126718   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:26.126723   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:26.126771   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:26.160901   72122 cri.go:89] found id: ""
	I0910 19:01:26.160939   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.160951   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:26.160959   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:26.161017   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:26.195703   72122 cri.go:89] found id: ""
	I0910 19:01:26.195728   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.195737   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:26.195743   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:26.195794   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:26.230394   72122 cri.go:89] found id: ""
	I0910 19:01:26.230414   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.230422   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:26.230430   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:26.230444   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:26.296884   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:26.296905   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:26.296926   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:26.371536   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:26.371569   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:26.412926   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:26.412958   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:26.462521   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:26.462550   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:27.725957   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:30.224312   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:28.034338   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:30.034794   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:32.035284   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:28.668205   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:31.166271   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:28.976550   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:28.989517   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:28.989586   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:29.025638   72122 cri.go:89] found id: ""
	I0910 19:01:29.025662   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.025671   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:29.025677   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:29.025719   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:29.067473   72122 cri.go:89] found id: ""
	I0910 19:01:29.067495   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.067502   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:29.067507   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:29.067556   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:29.105587   72122 cri.go:89] found id: ""
	I0910 19:01:29.105616   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.105628   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:29.105635   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:29.105696   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:29.142427   72122 cri.go:89] found id: ""
	I0910 19:01:29.142458   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.142468   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:29.142474   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:29.142530   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:29.178553   72122 cri.go:89] found id: ""
	I0910 19:01:29.178575   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.178582   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:29.178587   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:29.178638   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:29.212997   72122 cri.go:89] found id: ""
	I0910 19:01:29.213025   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.213034   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:29.213040   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:29.213109   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:29.247057   72122 cri.go:89] found id: ""
	I0910 19:01:29.247083   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.247091   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:29.247097   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:29.247151   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:29.285042   72122 cri.go:89] found id: ""
	I0910 19:01:29.285084   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.285096   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:29.285107   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:29.285131   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:29.336003   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:29.336033   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:29.349867   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:29.349890   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:29.422006   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:29.422028   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:29.422043   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:29.504047   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:29.504079   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:32.050723   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:32.063851   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:32.063904   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:32.100816   72122 cri.go:89] found id: ""
	I0910 19:01:32.100841   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.100851   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:32.100858   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:32.100924   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:32.134863   72122 cri.go:89] found id: ""
	I0910 19:01:32.134892   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.134902   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:32.134909   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:32.134967   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:32.169873   72122 cri.go:89] found id: ""
	I0910 19:01:32.169901   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.169912   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:32.169919   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:32.169973   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:32.202161   72122 cri.go:89] found id: ""
	I0910 19:01:32.202187   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.202197   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:32.202204   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:32.202264   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:32.236850   72122 cri.go:89] found id: ""
	I0910 19:01:32.236879   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.236888   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:32.236896   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:32.236957   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:32.271479   72122 cri.go:89] found id: ""
	I0910 19:01:32.271511   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.271530   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:32.271542   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:32.271701   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:32.306724   72122 cri.go:89] found id: ""
	I0910 19:01:32.306747   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.306754   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:32.306760   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:32.306811   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:32.341153   72122 cri.go:89] found id: ""
	I0910 19:01:32.341184   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.341195   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:32.341206   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:32.341221   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:32.393087   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:32.393122   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:32.406565   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:32.406591   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:32.478030   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:32.478048   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:32.478079   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:32.224371   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:34.723372   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:34.533510   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:37.033933   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:33.671725   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:36.165396   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:32.568440   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:32.568478   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:35.112022   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:35.125210   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:35.125286   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:35.160716   72122 cri.go:89] found id: ""
	I0910 19:01:35.160743   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.160753   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:35.160759   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:35.160817   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:35.196500   72122 cri.go:89] found id: ""
	I0910 19:01:35.196530   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.196541   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:35.196548   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:35.196622   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:35.232476   72122 cri.go:89] found id: ""
	I0910 19:01:35.232510   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.232521   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:35.232528   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:35.232590   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:35.269612   72122 cri.go:89] found id: ""
	I0910 19:01:35.269635   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.269644   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:35.269649   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:35.269697   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:35.307368   72122 cri.go:89] found id: ""
	I0910 19:01:35.307393   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.307401   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:35.307408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:35.307475   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:35.342079   72122 cri.go:89] found id: ""
	I0910 19:01:35.342108   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.342119   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:35.342126   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:35.342188   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:35.379732   72122 cri.go:89] found id: ""
	I0910 19:01:35.379761   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.379771   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:35.379778   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:35.379840   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:35.419067   72122 cri.go:89] found id: ""
	I0910 19:01:35.419098   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.419109   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:35.419120   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:35.419139   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:35.472459   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:35.472494   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:35.487044   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:35.487078   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:35.565242   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:35.565264   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:35.565282   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:35.645918   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:35.645951   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:36.724330   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:38.724368   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:41.224272   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:39.533968   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:41.534579   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:38.666059   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:41.164158   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:38.189238   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:38.203973   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:38.204035   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:38.241402   72122 cri.go:89] found id: ""
	I0910 19:01:38.241428   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.241438   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:38.241446   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:38.241506   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:38.280657   72122 cri.go:89] found id: ""
	I0910 19:01:38.280685   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.280693   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:38.280698   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:38.280753   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:38.319697   72122 cri.go:89] found id: ""
	I0910 19:01:38.319725   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.319735   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:38.319742   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:38.319804   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:38.356766   72122 cri.go:89] found id: ""
	I0910 19:01:38.356799   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.356810   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:38.356817   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:38.356876   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:38.395468   72122 cri.go:89] found id: ""
	I0910 19:01:38.395497   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.395508   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:38.395516   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:38.395577   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:38.434942   72122 cri.go:89] found id: ""
	I0910 19:01:38.434965   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.434974   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:38.434979   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:38.435025   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:38.470687   72122 cri.go:89] found id: ""
	I0910 19:01:38.470715   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.470724   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:38.470729   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:38.470777   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:38.505363   72122 cri.go:89] found id: ""
	I0910 19:01:38.505394   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.505405   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:38.505417   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:38.505432   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:38.557735   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:38.557770   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:38.586094   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:38.586128   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:38.665190   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:38.665215   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:38.665231   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:38.743748   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:38.743779   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:41.284310   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:41.299086   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:41.299157   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:41.340453   72122 cri.go:89] found id: ""
	I0910 19:01:41.340476   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.340484   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:41.340489   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:41.340544   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:41.374028   72122 cri.go:89] found id: ""
	I0910 19:01:41.374052   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.374060   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:41.374066   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:41.374117   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:41.413888   72122 cri.go:89] found id: ""
	I0910 19:01:41.413915   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.413929   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:41.413935   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:41.413994   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:41.450846   72122 cri.go:89] found id: ""
	I0910 19:01:41.450873   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.450883   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:41.450890   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:41.450950   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:41.484080   72122 cri.go:89] found id: ""
	I0910 19:01:41.484107   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.484115   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:41.484120   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:41.484168   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:41.523652   72122 cri.go:89] found id: ""
	I0910 19:01:41.523677   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.523685   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:41.523690   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:41.523749   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:41.563690   72122 cri.go:89] found id: ""
	I0910 19:01:41.563715   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.563727   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:41.563734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:41.563797   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:41.602101   72122 cri.go:89] found id: ""
	I0910 19:01:41.602122   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.602130   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:41.602137   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:41.602152   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:41.655459   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:41.655488   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:41.670037   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:41.670062   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:41.741399   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:41.741417   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:41.741428   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:41.817411   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:41.817445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:43.726285   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:46.223867   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:44.034404   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:46.533246   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:43.666629   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:46.164675   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:44.363631   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:44.378279   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:44.378344   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:44.412450   72122 cri.go:89] found id: ""
	I0910 19:01:44.412486   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.412495   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:44.412502   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:44.412569   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:44.448378   72122 cri.go:89] found id: ""
	I0910 19:01:44.448407   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.448415   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:44.448420   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:44.448470   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:44.483478   72122 cri.go:89] found id: ""
	I0910 19:01:44.483516   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.483524   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:44.483530   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:44.483584   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:44.517787   72122 cri.go:89] found id: ""
	I0910 19:01:44.517812   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.517822   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:44.517828   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:44.517886   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:44.554909   72122 cri.go:89] found id: ""
	I0910 19:01:44.554939   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.554950   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:44.554957   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:44.555018   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:44.589865   72122 cri.go:89] found id: ""
	I0910 19:01:44.589890   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.589909   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:44.589923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:44.589968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:44.626712   72122 cri.go:89] found id: ""
	I0910 19:01:44.626739   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.626749   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:44.626756   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:44.626815   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:44.664985   72122 cri.go:89] found id: ""
	I0910 19:01:44.665067   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.665103   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:44.665114   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:44.665165   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:44.721160   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:44.721196   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:44.735339   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:44.735366   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:44.810056   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:44.810080   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:44.810094   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:44.898822   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:44.898871   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:47.438440   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:47.451438   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:47.451506   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:48.723661   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:50.723768   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:48.534671   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:51.033397   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:48.164739   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:50.665165   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:47.491703   72122 cri.go:89] found id: ""
	I0910 19:01:47.491729   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.491740   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:47.491747   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:47.491811   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:47.526834   72122 cri.go:89] found id: ""
	I0910 19:01:47.526862   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.526874   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:47.526880   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:47.526940   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:47.570463   72122 cri.go:89] found id: ""
	I0910 19:01:47.570488   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.570496   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:47.570503   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:47.570545   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:47.608691   72122 cri.go:89] found id: ""
	I0910 19:01:47.608715   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.608727   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:47.608734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:47.608780   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:47.648279   72122 cri.go:89] found id: ""
	I0910 19:01:47.648308   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.648316   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:47.648324   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:47.648386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:47.684861   72122 cri.go:89] found id: ""
	I0910 19:01:47.684885   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.684892   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:47.684897   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:47.684947   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:47.721004   72122 cri.go:89] found id: ""
	I0910 19:01:47.721037   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.721049   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:47.721056   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:47.721134   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:47.756154   72122 cri.go:89] found id: ""
	I0910 19:01:47.756181   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.756192   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:47.756202   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:47.756217   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:47.806860   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:47.806889   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:47.822419   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:47.822445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:47.891966   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:47.891986   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:47.892000   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:47.978510   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:47.978561   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:50.519264   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:50.533576   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:50.533630   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:50.567574   72122 cri.go:89] found id: ""
	I0910 19:01:50.567601   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.567612   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:50.567619   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:50.567678   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:50.608824   72122 cri.go:89] found id: ""
	I0910 19:01:50.608850   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.608858   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:50.608863   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:50.608939   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:50.644502   72122 cri.go:89] found id: ""
	I0910 19:01:50.644530   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.644538   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:50.644544   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:50.644590   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:50.682309   72122 cri.go:89] found id: ""
	I0910 19:01:50.682332   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.682340   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:50.682345   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:50.682404   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:50.735372   72122 cri.go:89] found id: ""
	I0910 19:01:50.735398   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.735410   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:50.735418   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:50.735482   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:50.786364   72122 cri.go:89] found id: ""
	I0910 19:01:50.786391   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.786401   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:50.786408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:50.786464   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:50.831525   72122 cri.go:89] found id: ""
	I0910 19:01:50.831564   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.831575   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:50.831582   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:50.831645   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:50.873457   72122 cri.go:89] found id: ""
	I0910 19:01:50.873482   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.873493   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:50.873503   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:50.873524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:50.956032   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:50.956069   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:50.996871   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:50.996904   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:51.047799   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:51.047824   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:51.061946   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:51.061970   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:51.136302   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:53.222492   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:55.223835   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:53.034478   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:55.532623   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:52.665991   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:55.164343   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:53.636464   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:53.649971   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:53.650054   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:53.688172   72122 cri.go:89] found id: ""
	I0910 19:01:53.688201   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.688211   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:53.688217   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:53.688274   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:53.725094   72122 cri.go:89] found id: ""
	I0910 19:01:53.725119   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.725128   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:53.725135   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:53.725196   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:53.763866   72122 cri.go:89] found id: ""
	I0910 19:01:53.763893   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.763907   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:53.763914   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:53.763971   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:53.797760   72122 cri.go:89] found id: ""
	I0910 19:01:53.797787   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.797798   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:53.797805   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:53.797862   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:53.830305   72122 cri.go:89] found id: ""
	I0910 19:01:53.830332   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.830340   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:53.830346   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:53.830402   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:53.861970   72122 cri.go:89] found id: ""
	I0910 19:01:53.861995   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.862003   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:53.862009   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:53.862059   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:53.896577   72122 cri.go:89] found id: ""
	I0910 19:01:53.896600   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.896609   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:53.896614   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:53.896660   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:53.935051   72122 cri.go:89] found id: ""
	I0910 19:01:53.935077   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.935086   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:53.935094   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:53.935105   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:53.950252   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:53.950276   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:54.023327   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:54.023346   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:54.023361   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:54.101605   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:54.101643   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:54.142906   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:54.142930   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:56.697701   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:56.717755   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:56.717836   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:56.763564   72122 cri.go:89] found id: ""
	I0910 19:01:56.763594   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.763606   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:56.763613   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:56.763675   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:56.815780   72122 cri.go:89] found id: ""
	I0910 19:01:56.815808   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.815816   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:56.815821   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:56.815883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:56.848983   72122 cri.go:89] found id: ""
	I0910 19:01:56.849013   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.849024   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:56.849032   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:56.849100   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:56.880660   72122 cri.go:89] found id: ""
	I0910 19:01:56.880690   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.880702   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:56.880709   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:56.880756   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:56.922836   72122 cri.go:89] found id: ""
	I0910 19:01:56.922860   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.922867   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:56.922873   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:56.922938   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:56.963474   72122 cri.go:89] found id: ""
	I0910 19:01:56.963505   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.963517   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:56.963524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:56.963585   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:56.996837   72122 cri.go:89] found id: ""
	I0910 19:01:56.996864   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.996872   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:56.996877   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:56.996925   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:57.029594   72122 cri.go:89] found id: ""
	I0910 19:01:57.029629   72122 logs.go:276] 0 containers: []
	W0910 19:01:57.029640   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:57.029651   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:57.029664   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:57.083745   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:57.083772   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:57.099269   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:57.099293   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:57.174098   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:57.174118   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:57.174129   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:57.258833   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:57.258869   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:57.224384   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:59.722547   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:57.533178   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:59.533798   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:02.035089   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:57.665383   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:00.164920   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:59.800644   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:59.814728   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:59.814805   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:59.854081   72122 cri.go:89] found id: ""
	I0910 19:01:59.854113   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.854124   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:59.854133   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:59.854197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:59.889524   72122 cri.go:89] found id: ""
	I0910 19:01:59.889550   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.889560   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:59.889567   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:59.889626   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:59.925833   72122 cri.go:89] found id: ""
	I0910 19:01:59.925859   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.925866   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:59.925872   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:59.925935   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:59.962538   72122 cri.go:89] found id: ""
	I0910 19:01:59.962575   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.962586   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:59.962593   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:59.962650   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:59.996994   72122 cri.go:89] found id: ""
	I0910 19:01:59.997025   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.997037   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:59.997045   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:59.997126   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:00.032881   72122 cri.go:89] found id: ""
	I0910 19:02:00.032905   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.032915   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:00.032923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:00.032988   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:00.065838   72122 cri.go:89] found id: ""
	I0910 19:02:00.065861   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.065869   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:00.065874   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:00.065927   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:00.099479   72122 cri.go:89] found id: ""
	I0910 19:02:00.099505   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.099516   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:00.099526   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:00.099540   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:00.182661   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:00.182689   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:00.223514   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:00.223553   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:00.273695   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:00.273721   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:00.287207   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:00.287233   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:00.353975   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:01.724647   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:04.224071   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:06.225475   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:04.534230   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:06.534474   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:02.665228   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:04.667935   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:07.163506   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:02.854145   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:02.867413   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:02.867484   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:02.904299   72122 cri.go:89] found id: ""
	I0910 19:02:02.904327   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.904335   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:02.904340   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:02.904392   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:02.940981   72122 cri.go:89] found id: ""
	I0910 19:02:02.941010   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.941019   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:02.941024   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:02.941099   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:02.980013   72122 cri.go:89] found id: ""
	I0910 19:02:02.980038   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.980046   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:02.980052   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:02.980111   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:03.020041   72122 cri.go:89] found id: ""
	I0910 19:02:03.020071   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.020080   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:03.020087   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:03.020144   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:03.055228   72122 cri.go:89] found id: ""
	I0910 19:02:03.055264   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.055277   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:03.055285   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:03.055347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:03.088696   72122 cri.go:89] found id: ""
	I0910 19:02:03.088722   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.088730   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:03.088736   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:03.088787   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:03.124753   72122 cri.go:89] found id: ""
	I0910 19:02:03.124776   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.124785   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:03.124792   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:03.124849   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:03.157191   72122 cri.go:89] found id: ""
	I0910 19:02:03.157222   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.157230   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:03.157238   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:03.157248   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:03.239015   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:03.239044   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:03.279323   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:03.279355   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:03.328034   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:03.328067   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:03.341591   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:03.341620   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:03.411057   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:05.911503   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:05.924794   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:05.924868   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:05.958827   72122 cri.go:89] found id: ""
	I0910 19:02:05.958852   72122 logs.go:276] 0 containers: []
	W0910 19:02:05.958859   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:05.958865   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:05.958920   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:05.992376   72122 cri.go:89] found id: ""
	I0910 19:02:05.992412   72122 logs.go:276] 0 containers: []
	W0910 19:02:05.992423   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:05.992429   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:05.992485   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:06.028058   72122 cri.go:89] found id: ""
	I0910 19:02:06.028088   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.028098   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:06.028107   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:06.028162   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:06.066428   72122 cri.go:89] found id: ""
	I0910 19:02:06.066458   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.066470   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:06.066477   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:06.066533   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:06.102750   72122 cri.go:89] found id: ""
	I0910 19:02:06.102774   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.102782   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:06.102787   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:06.102841   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:06.137216   72122 cri.go:89] found id: ""
	I0910 19:02:06.137243   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.137254   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:06.137261   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:06.137323   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:06.175227   72122 cri.go:89] found id: ""
	I0910 19:02:06.175251   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.175259   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:06.175265   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:06.175311   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:06.210197   72122 cri.go:89] found id: ""
	I0910 19:02:06.210222   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.210230   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:06.210238   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:06.210249   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:06.261317   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:06.261353   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:06.275196   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:06.275225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:06.354186   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:06.354205   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:06.354219   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:06.436726   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:06.436763   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:08.723505   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:10.724499   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:09.035939   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:11.534648   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:09.166629   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:11.666941   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:08.979157   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:08.992097   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:08.992156   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:09.025260   72122 cri.go:89] found id: ""
	I0910 19:02:09.025282   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.025289   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:09.025295   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:09.025360   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:09.059139   72122 cri.go:89] found id: ""
	I0910 19:02:09.059166   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.059177   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:09.059186   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:09.059240   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:09.092935   72122 cri.go:89] found id: ""
	I0910 19:02:09.092964   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.092973   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:09.092979   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:09.093027   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:09.127273   72122 cri.go:89] found id: ""
	I0910 19:02:09.127299   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.127310   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:09.127317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:09.127367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:09.163353   72122 cri.go:89] found id: ""
	I0910 19:02:09.163380   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.163389   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:09.163396   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:09.163453   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:09.195371   72122 cri.go:89] found id: ""
	I0910 19:02:09.195396   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.195407   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:09.195414   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:09.195473   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:09.229338   72122 cri.go:89] found id: ""
	I0910 19:02:09.229361   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.229370   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:09.229376   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:09.229432   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:09.262822   72122 cri.go:89] found id: ""
	I0910 19:02:09.262847   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.262857   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:09.262874   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:09.262891   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:09.330079   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:09.330103   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:09.330119   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:09.408969   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:09.409003   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:09.447666   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:09.447702   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:09.501111   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:09.501141   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:12.016407   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:12.030822   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:12.030905   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:12.069191   72122 cri.go:89] found id: ""
	I0910 19:02:12.069218   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.069229   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:12.069236   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:12.069306   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:12.103687   72122 cri.go:89] found id: ""
	I0910 19:02:12.103726   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.103737   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:12.103862   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:12.103937   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:12.142891   72122 cri.go:89] found id: ""
	I0910 19:02:12.142920   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.142932   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:12.142940   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:12.142998   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:12.178966   72122 cri.go:89] found id: ""
	I0910 19:02:12.178991   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.179002   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:12.179010   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:12.179069   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:12.216070   72122 cri.go:89] found id: ""
	I0910 19:02:12.216093   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.216104   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:12.216112   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:12.216161   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:12.251447   72122 cri.go:89] found id: ""
	I0910 19:02:12.251479   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.251492   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:12.251500   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:12.251568   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:12.284640   72122 cri.go:89] found id: ""
	I0910 19:02:12.284666   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.284677   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:12.284682   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:12.284743   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:12.319601   72122 cri.go:89] found id: ""
	I0910 19:02:12.319625   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.319632   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:12.319639   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:12.319650   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:12.372932   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:12.372965   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:12.387204   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:12.387228   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:12.459288   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:12.459308   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:12.459323   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:13.223679   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:15.224341   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:14.034036   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:16.533341   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:13.667258   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:16.164610   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:12.549161   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:12.549198   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:15.092557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:15.105391   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:15.105456   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:15.139486   72122 cri.go:89] found id: ""
	I0910 19:02:15.139515   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.139524   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:15.139530   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:15.139591   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:15.173604   72122 cri.go:89] found id: ""
	I0910 19:02:15.173630   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.173641   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:15.173648   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:15.173710   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:15.208464   72122 cri.go:89] found id: ""
	I0910 19:02:15.208492   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.208503   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:15.208510   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:15.208568   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:15.247536   72122 cri.go:89] found id: ""
	I0910 19:02:15.247567   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.247579   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:15.247586   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:15.247650   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:15.285734   72122 cri.go:89] found id: ""
	I0910 19:02:15.285764   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.285775   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:15.285782   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:15.285858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:15.320755   72122 cri.go:89] found id: ""
	I0910 19:02:15.320782   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.320792   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:15.320798   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:15.320849   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:15.357355   72122 cri.go:89] found id: ""
	I0910 19:02:15.357384   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.357395   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:15.357402   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:15.357463   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:15.392105   72122 cri.go:89] found id: ""
	I0910 19:02:15.392130   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.392137   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:15.392149   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:15.392160   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:15.444433   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:15.444465   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:15.458759   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:15.458784   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:15.523490   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:15.523507   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:15.523524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:15.607584   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:15.607616   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:17.224472   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:19.723953   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:18.534545   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:20.535036   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:18.667949   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:20.669762   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:18.146611   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:18.160311   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:18.160378   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:18.195072   72122 cri.go:89] found id: ""
	I0910 19:02:18.195099   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.195109   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:18.195127   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:18.195201   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:18.230099   72122 cri.go:89] found id: ""
	I0910 19:02:18.230129   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.230138   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:18.230145   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:18.230201   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:18.268497   72122 cri.go:89] found id: ""
	I0910 19:02:18.268525   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.268534   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:18.268539   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:18.268599   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:18.304929   72122 cri.go:89] found id: ""
	I0910 19:02:18.304966   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.304978   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:18.304985   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:18.305048   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:18.339805   72122 cri.go:89] found id: ""
	I0910 19:02:18.339839   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.339861   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:18.339868   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:18.339925   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:18.378353   72122 cri.go:89] found id: ""
	I0910 19:02:18.378372   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.378381   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:18.378393   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:18.378438   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:18.415175   72122 cri.go:89] found id: ""
	I0910 19:02:18.415195   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.415203   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:18.415208   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:18.415262   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:18.450738   72122 cri.go:89] found id: ""
	I0910 19:02:18.450762   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.450769   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:18.450778   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:18.450793   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:18.530943   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:18.530975   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:18.568983   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:18.569021   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:18.622301   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:18.622336   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:18.635788   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:18.635815   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:18.715729   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:21.216082   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:21.229419   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:21.229488   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:21.265152   72122 cri.go:89] found id: ""
	I0910 19:02:21.265183   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.265193   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:21.265201   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:21.265262   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:21.300766   72122 cri.go:89] found id: ""
	I0910 19:02:21.300797   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.300815   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:21.300823   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:21.300883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:21.333416   72122 cri.go:89] found id: ""
	I0910 19:02:21.333443   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.333452   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:21.333460   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:21.333526   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:21.371112   72122 cri.go:89] found id: ""
	I0910 19:02:21.371142   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.371150   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:21.371156   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:21.371214   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:21.405657   72122 cri.go:89] found id: ""
	I0910 19:02:21.405684   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.405695   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:21.405703   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:21.405755   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:21.440354   72122 cri.go:89] found id: ""
	I0910 19:02:21.440381   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.440392   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:21.440400   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:21.440464   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:21.480165   72122 cri.go:89] found id: ""
	I0910 19:02:21.480189   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.480199   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:21.480206   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:21.480273   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:21.518422   72122 cri.go:89] found id: ""
	I0910 19:02:21.518449   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.518459   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:21.518470   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:21.518486   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:21.572263   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:21.572300   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:21.588179   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:21.588204   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:21.658330   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:21.658356   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:21.658371   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:21.743026   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:21.743063   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:21.724730   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.724844   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:26.225026   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.034593   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:25.037588   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.164712   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:25.664475   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:24.286604   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:24.299783   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:24.299847   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:24.336998   72122 cri.go:89] found id: ""
	I0910 19:02:24.337031   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.337042   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:24.337050   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:24.337123   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:24.374198   72122 cri.go:89] found id: ""
	I0910 19:02:24.374223   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.374231   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:24.374236   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:24.374289   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:24.407783   72122 cri.go:89] found id: ""
	I0910 19:02:24.407812   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.407822   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:24.407828   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:24.407881   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:24.443285   72122 cri.go:89] found id: ""
	I0910 19:02:24.443307   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.443315   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:24.443321   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:24.443367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:24.477176   72122 cri.go:89] found id: ""
	I0910 19:02:24.477198   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.477206   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:24.477212   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:24.477266   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:24.509762   72122 cri.go:89] found id: ""
	I0910 19:02:24.509783   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.509791   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:24.509797   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:24.509858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:24.548746   72122 cri.go:89] found id: ""
	I0910 19:02:24.548775   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.548785   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:24.548793   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:24.548851   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:24.583265   72122 cri.go:89] found id: ""
	I0910 19:02:24.583297   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.583313   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:24.583324   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:24.583338   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:24.634966   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:24.635001   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:24.649844   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:24.649869   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:24.721795   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:24.721824   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:24.721840   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:24.807559   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:24.807593   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:27.352779   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:27.366423   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:27.366495   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:27.399555   72122 cri.go:89] found id: ""
	I0910 19:02:27.399582   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.399591   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:27.399596   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:27.399662   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:27.434151   72122 cri.go:89] found id: ""
	I0910 19:02:27.434179   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.434188   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:27.434194   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:27.434265   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:27.467053   72122 cri.go:89] found id: ""
	I0910 19:02:27.467081   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.467092   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:27.467099   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:27.467156   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:28.724149   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:31.224185   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:27.533697   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:29.533815   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:32.034343   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:27.667816   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:30.164174   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:27.500999   72122 cri.go:89] found id: ""
	I0910 19:02:27.501030   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.501039   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:27.501044   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:27.501114   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:27.537981   72122 cri.go:89] found id: ""
	I0910 19:02:27.538000   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.538007   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:27.538012   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:27.538060   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:27.568622   72122 cri.go:89] found id: ""
	I0910 19:02:27.568649   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.568660   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:27.568668   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:27.568724   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:27.603035   72122 cri.go:89] found id: ""
	I0910 19:02:27.603058   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.603067   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:27.603072   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:27.603131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:27.637624   72122 cri.go:89] found id: ""
	I0910 19:02:27.637651   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.637662   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:27.637673   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:27.637693   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:27.651893   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:27.651915   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:27.723949   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:27.723969   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:27.723983   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:27.801463   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:27.801496   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:27.841969   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:27.842000   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:30.398857   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:30.412720   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:30.412790   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:30.448125   72122 cri.go:89] found id: ""
	I0910 19:02:30.448152   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.448163   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:30.448171   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:30.448234   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:30.481988   72122 cri.go:89] found id: ""
	I0910 19:02:30.482016   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.482027   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:30.482035   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:30.482083   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:30.516548   72122 cri.go:89] found id: ""
	I0910 19:02:30.516576   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.516583   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:30.516589   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:30.516646   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:30.566884   72122 cri.go:89] found id: ""
	I0910 19:02:30.566910   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.566918   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:30.566923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:30.566975   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:30.602278   72122 cri.go:89] found id: ""
	I0910 19:02:30.602306   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.602314   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:30.602319   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:30.602379   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:30.636708   72122 cri.go:89] found id: ""
	I0910 19:02:30.636732   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.636740   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:30.636745   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:30.636797   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:30.681255   72122 cri.go:89] found id: ""
	I0910 19:02:30.681280   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.681295   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:30.681303   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:30.681361   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:30.715516   72122 cri.go:89] found id: ""
	I0910 19:02:30.715543   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.715551   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:30.715560   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:30.715572   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:30.768916   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:30.768948   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:30.783318   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:30.783348   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:30.852901   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:30.852925   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:30.852940   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:30.932276   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:30.932314   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:33.725716   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:36.223970   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:34.533148   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:36.533854   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:32.667516   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:35.164375   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:33.471931   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:33.486152   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:33.486211   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:33.524130   72122 cri.go:89] found id: ""
	I0910 19:02:33.524161   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.524173   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:33.524180   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:33.524238   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:33.562216   72122 cri.go:89] found id: ""
	I0910 19:02:33.562238   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.562247   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:33.562252   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:33.562305   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:33.596587   72122 cri.go:89] found id: ""
	I0910 19:02:33.596615   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.596626   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:33.596634   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:33.596692   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:33.633307   72122 cri.go:89] found id: ""
	I0910 19:02:33.633330   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.633338   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:33.633343   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:33.633403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:33.667780   72122 cri.go:89] found id: ""
	I0910 19:02:33.667805   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.667815   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:33.667820   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:33.667878   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:33.702406   72122 cri.go:89] found id: ""
	I0910 19:02:33.702436   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.702447   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:33.702456   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:33.702524   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:33.744544   72122 cri.go:89] found id: ""
	I0910 19:02:33.744574   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.744581   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:33.744587   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:33.744661   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:33.782000   72122 cri.go:89] found id: ""
	I0910 19:02:33.782024   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.782032   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:33.782040   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:33.782053   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:33.858087   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:33.858115   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:33.858133   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:33.943238   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:33.943278   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:33.987776   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:33.987804   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:34.043197   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:34.043232   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:36.558122   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:36.571125   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:36.571195   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:36.606195   72122 cri.go:89] found id: ""
	I0910 19:02:36.606228   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.606239   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:36.606246   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:36.606304   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:36.640248   72122 cri.go:89] found id: ""
	I0910 19:02:36.640290   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.640302   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:36.640310   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:36.640360   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:36.676916   72122 cri.go:89] found id: ""
	I0910 19:02:36.676942   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.676952   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:36.676958   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:36.677013   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:36.713183   72122 cri.go:89] found id: ""
	I0910 19:02:36.713207   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.713218   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:36.713225   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:36.713283   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:36.750748   72122 cri.go:89] found id: ""
	I0910 19:02:36.750775   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.750782   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:36.750787   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:36.750847   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:36.782614   72122 cri.go:89] found id: ""
	I0910 19:02:36.782636   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.782644   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:36.782650   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:36.782709   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:36.822051   72122 cri.go:89] found id: ""
	I0910 19:02:36.822083   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.822094   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:36.822102   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:36.822161   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:36.856068   72122 cri.go:89] found id: ""
	I0910 19:02:36.856096   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.856106   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:36.856117   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:36.856132   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:36.909586   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:36.909625   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:36.931649   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:36.931676   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:37.040146   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:37.040175   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:37.040194   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:37.121902   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:37.121942   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:38.723762   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:40.723880   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:38.534001   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:41.035356   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:37.665212   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:39.668115   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:42.164118   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:39.665474   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:39.678573   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:39.678633   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:39.712755   72122 cri.go:89] found id: ""
	I0910 19:02:39.712783   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.712793   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:39.712800   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:39.712857   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:39.744709   72122 cri.go:89] found id: ""
	I0910 19:02:39.744738   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.744748   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:39.744756   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:39.744809   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:39.780161   72122 cri.go:89] found id: ""
	I0910 19:02:39.780189   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.780200   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:39.780207   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:39.780255   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:39.817665   72122 cri.go:89] found id: ""
	I0910 19:02:39.817695   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.817704   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:39.817709   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:39.817757   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:39.857255   72122 cri.go:89] found id: ""
	I0910 19:02:39.857291   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.857299   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:39.857306   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:39.857381   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:39.893514   72122 cri.go:89] found id: ""
	I0910 19:02:39.893540   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.893550   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:39.893558   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:39.893614   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:39.932720   72122 cri.go:89] found id: ""
	I0910 19:02:39.932753   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.932767   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:39.932775   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:39.932835   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:39.977063   72122 cri.go:89] found id: ""
	I0910 19:02:39.977121   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.977135   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:39.977146   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:39.977168   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:39.991414   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:39.991445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:40.066892   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:40.066910   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:40.066922   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:40.150648   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:40.150680   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:40.198519   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:40.198561   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:42.724332   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:45.223804   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:43.533841   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:45.534665   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:44.164851   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:46.165259   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:42.749906   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:42.769633   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:42.769703   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:42.812576   72122 cri.go:89] found id: ""
	I0910 19:02:42.812603   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.812613   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:42.812620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:42.812682   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:42.846233   72122 cri.go:89] found id: ""
	I0910 19:02:42.846257   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.846266   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:42.846271   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:42.846326   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:42.883564   72122 cri.go:89] found id: ""
	I0910 19:02:42.883593   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.883605   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:42.883612   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:42.883669   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:42.920774   72122 cri.go:89] found id: ""
	I0910 19:02:42.920801   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.920813   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:42.920820   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:42.920883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:42.953776   72122 cri.go:89] found id: ""
	I0910 19:02:42.953808   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.953820   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:42.953829   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:42.953887   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:42.989770   72122 cri.go:89] found id: ""
	I0910 19:02:42.989806   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.989820   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:42.989829   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:42.989893   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:43.022542   72122 cri.go:89] found id: ""
	I0910 19:02:43.022567   72122 logs.go:276] 0 containers: []
	W0910 19:02:43.022574   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:43.022580   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:43.022629   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:43.064308   72122 cri.go:89] found id: ""
	I0910 19:02:43.064329   72122 logs.go:276] 0 containers: []
	W0910 19:02:43.064337   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:43.064344   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:43.064356   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:43.120212   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:43.120243   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:43.134269   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:43.134296   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:43.218840   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:43.218865   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:43.218880   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:43.302560   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:43.302591   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:45.842788   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:45.857495   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:45.857557   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:45.892745   72122 cri.go:89] found id: ""
	I0910 19:02:45.892772   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.892782   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:45.892790   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:45.892888   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:45.928451   72122 cri.go:89] found id: ""
	I0910 19:02:45.928476   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.928486   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:45.928493   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:45.928551   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:45.962868   72122 cri.go:89] found id: ""
	I0910 19:02:45.962899   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.962910   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:45.962918   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:45.962979   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:45.996975   72122 cri.go:89] found id: ""
	I0910 19:02:45.997000   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.997009   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:45.997014   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:45.997065   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:46.032271   72122 cri.go:89] found id: ""
	I0910 19:02:46.032299   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.032309   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:46.032317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:46.032375   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:46.072629   72122 cri.go:89] found id: ""
	I0910 19:02:46.072654   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.072662   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:46.072667   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:46.072713   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:46.112196   72122 cri.go:89] found id: ""
	I0910 19:02:46.112220   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.112228   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:46.112233   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:46.112298   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:46.155700   72122 cri.go:89] found id: ""
	I0910 19:02:46.155732   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.155745   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:46.155759   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:46.155794   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:46.210596   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:46.210624   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:46.224951   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:46.224980   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:46.294571   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:46.294597   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:46.294613   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:46.382431   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:46.382495   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:47.224808   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:49.225392   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:51.227601   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:48.033643   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:50.535490   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:48.665543   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:50.666596   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:48.926582   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:48.941256   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:48.941338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:48.979810   72122 cri.go:89] found id: ""
	I0910 19:02:48.979842   72122 logs.go:276] 0 containers: []
	W0910 19:02:48.979849   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:48.979856   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:48.979917   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:49.015083   72122 cri.go:89] found id: ""
	I0910 19:02:49.015126   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.015136   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:49.015144   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:49.015205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:49.052417   72122 cri.go:89] found id: ""
	I0910 19:02:49.052445   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.052453   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:49.052459   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:49.052511   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:49.092485   72122 cri.go:89] found id: ""
	I0910 19:02:49.092523   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.092533   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:49.092538   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:49.092588   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:49.127850   72122 cri.go:89] found id: ""
	I0910 19:02:49.127882   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.127889   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:49.127897   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:49.127952   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:49.160693   72122 cri.go:89] found id: ""
	I0910 19:02:49.160724   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.160733   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:49.160740   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:49.160798   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:49.194713   72122 cri.go:89] found id: ""
	I0910 19:02:49.194737   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.194744   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:49.194750   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:49.194804   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:49.229260   72122 cri.go:89] found id: ""
	I0910 19:02:49.229283   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.229292   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:49.229303   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:49.229320   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:49.281963   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:49.281992   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:49.294789   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:49.294809   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:49.366126   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:49.366152   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:49.366172   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:49.451187   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:49.451225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:51.990361   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:52.003744   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:52.003807   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:52.036794   72122 cri.go:89] found id: ""
	I0910 19:02:52.036824   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.036834   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:52.036840   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:52.036896   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:52.074590   72122 cri.go:89] found id: ""
	I0910 19:02:52.074613   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.074620   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:52.074625   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:52.074675   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:52.119926   72122 cri.go:89] found id: ""
	I0910 19:02:52.119967   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.119981   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:52.119990   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:52.120075   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:52.157862   72122 cri.go:89] found id: ""
	I0910 19:02:52.157889   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.157900   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:52.157906   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:52.157968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:52.198645   72122 cri.go:89] found id: ""
	I0910 19:02:52.198675   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.198686   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:52.198693   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:52.198756   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:52.240091   72122 cri.go:89] found id: ""
	I0910 19:02:52.240113   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.240129   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:52.240139   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:52.240197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:52.275046   72122 cri.go:89] found id: ""
	I0910 19:02:52.275079   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.275090   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:52.275098   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:52.275179   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:52.311141   72122 cri.go:89] found id: ""
	I0910 19:02:52.311172   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.311184   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:52.311196   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:52.311211   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:52.400004   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:52.400039   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:52.449043   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:52.449090   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:53.724151   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:56.223353   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:53.033328   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:55.035259   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:53.164639   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:55.165714   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:52.502304   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:52.502336   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:52.518747   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:52.518772   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:52.593581   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:55.094092   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:55.108752   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:55.108830   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:55.143094   72122 cri.go:89] found id: ""
	I0910 19:02:55.143122   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.143133   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:55.143141   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:55.143198   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:55.184298   72122 cri.go:89] found id: ""
	I0910 19:02:55.184326   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.184334   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:55.184340   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:55.184397   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:55.216557   72122 cri.go:89] found id: ""
	I0910 19:02:55.216585   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.216596   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:55.216613   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:55.216676   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:55.251049   72122 cri.go:89] found id: ""
	I0910 19:02:55.251075   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.251083   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:55.251090   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:55.251152   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:55.282689   72122 cri.go:89] found id: ""
	I0910 19:02:55.282716   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.282724   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:55.282729   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:55.282800   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:55.316959   72122 cri.go:89] found id: ""
	I0910 19:02:55.316993   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.317004   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:55.317011   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:55.317085   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:55.353110   72122 cri.go:89] found id: ""
	I0910 19:02:55.353134   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.353143   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:55.353149   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:55.353205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:55.392391   72122 cri.go:89] found id: ""
	I0910 19:02:55.392422   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.392434   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:55.392446   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:55.392461   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:55.445431   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:55.445469   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:55.459348   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:55.459374   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:55.528934   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:55.528957   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:55.528973   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:55.610797   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:55.610833   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:58.223882   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:00.223951   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:57.533754   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:59.535018   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:01.535255   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:57.667276   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:00.164510   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:58.152775   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:58.166383   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:58.166440   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:58.203198   72122 cri.go:89] found id: ""
	I0910 19:02:58.203225   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.203233   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:58.203239   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:58.203284   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:58.240538   72122 cri.go:89] found id: ""
	I0910 19:02:58.240560   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.240567   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:58.240573   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:58.240633   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:58.274802   72122 cri.go:89] found id: ""
	I0910 19:02:58.274826   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.274833   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:58.274839   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:58.274886   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:58.311823   72122 cri.go:89] found id: ""
	I0910 19:02:58.311857   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.311868   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:58.311876   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:58.311933   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:58.347260   72122 cri.go:89] found id: ""
	I0910 19:02:58.347281   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.347288   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:58.347294   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:58.347338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:58.382621   72122 cri.go:89] found id: ""
	I0910 19:02:58.382645   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.382655   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:58.382662   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:58.382720   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:58.418572   72122 cri.go:89] found id: ""
	I0910 19:02:58.418597   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.418605   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:58.418611   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:58.418663   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:58.459955   72122 cri.go:89] found id: ""
	I0910 19:02:58.459987   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.459995   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:58.460003   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:58.460016   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:58.512831   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:58.512866   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:58.527036   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:58.527067   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:58.593329   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:58.593350   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:58.593366   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:58.671171   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:58.671201   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:01.211905   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:01.226567   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:01.226724   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:01.261860   72122 cri.go:89] found id: ""
	I0910 19:03:01.261885   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.261893   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:01.261898   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:01.261946   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:01.294754   72122 cri.go:89] found id: ""
	I0910 19:03:01.294774   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.294781   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:01.294786   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:01.294833   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:01.328378   72122 cri.go:89] found id: ""
	I0910 19:03:01.328403   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.328412   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:01.328417   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:01.328465   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:01.363344   72122 cri.go:89] found id: ""
	I0910 19:03:01.363370   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.363380   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:01.363388   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:01.363446   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:01.398539   72122 cri.go:89] found id: ""
	I0910 19:03:01.398576   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.398586   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:01.398593   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:01.398654   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:01.431367   72122 cri.go:89] found id: ""
	I0910 19:03:01.431390   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.431397   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:01.431403   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:01.431458   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:01.464562   72122 cri.go:89] found id: ""
	I0910 19:03:01.464589   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.464599   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:01.464606   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:01.464666   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:01.497493   72122 cri.go:89] found id: ""
	I0910 19:03:01.497520   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.497531   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:01.497540   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:01.497555   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:01.583083   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:01.583140   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:01.624887   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:01.624919   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:01.676124   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:01.676155   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:01.690861   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:01.690894   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:01.763695   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:02.724017   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.725049   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.033371   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:06.033600   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:02.666137   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.669740   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:07.164822   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.264867   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:04.279106   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:04.279176   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:04.315358   72122 cri.go:89] found id: ""
	I0910 19:03:04.315390   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.315398   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:04.315403   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:04.315457   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:04.359466   72122 cri.go:89] found id: ""
	I0910 19:03:04.359489   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.359496   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:04.359504   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:04.359563   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:04.399504   72122 cri.go:89] found id: ""
	I0910 19:03:04.399529   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.399538   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:04.399545   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:04.399604   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:04.438244   72122 cri.go:89] found id: ""
	I0910 19:03:04.438269   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.438277   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:04.438282   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:04.438340   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:04.475299   72122 cri.go:89] found id: ""
	I0910 19:03:04.475321   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.475329   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:04.475334   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:04.475386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:04.516500   72122 cri.go:89] found id: ""
	I0910 19:03:04.516520   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.516529   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:04.516534   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:04.516588   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:04.551191   72122 cri.go:89] found id: ""
	I0910 19:03:04.551214   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.551222   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:04.551228   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:04.551273   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:04.585646   72122 cri.go:89] found id: ""
	I0910 19:03:04.585667   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.585675   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:04.585684   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:04.585699   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:04.598832   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:04.598858   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:04.670117   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:04.670140   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:04.670156   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:04.746592   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:04.746626   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:04.784061   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:04.784088   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:07.337082   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:07.350696   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:07.350752   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:07.387344   72122 cri.go:89] found id: ""
	I0910 19:03:07.387373   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.387384   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:07.387391   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:07.387449   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:07.420468   72122 cri.go:89] found id: ""
	I0910 19:03:07.420490   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.420498   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:07.420503   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:07.420566   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:07.453746   72122 cri.go:89] found id: ""
	I0910 19:03:07.453773   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.453784   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:07.453791   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:07.453845   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:07.487359   72122 cri.go:89] found id: ""
	I0910 19:03:07.487388   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.487400   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:07.487407   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:07.487470   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:07.223432   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:09.723164   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:08.033767   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:10.035613   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:09.165972   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:11.663740   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:07.520803   72122 cri.go:89] found id: ""
	I0910 19:03:07.520827   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.520834   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:07.520839   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:07.520898   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:07.556908   72122 cri.go:89] found id: ""
	I0910 19:03:07.556934   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.556945   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:07.556953   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:07.557017   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:07.596072   72122 cri.go:89] found id: ""
	I0910 19:03:07.596093   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.596102   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:07.596107   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:07.596165   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:07.631591   72122 cri.go:89] found id: ""
	I0910 19:03:07.631620   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.631630   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:07.631639   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:07.631661   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:07.683892   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:07.683923   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:07.697619   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:07.697645   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:07.766370   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:07.766397   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:07.766413   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:07.854102   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:07.854140   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:10.400185   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:10.412771   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:10.412842   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:10.447710   72122 cri.go:89] found id: ""
	I0910 19:03:10.447739   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.447750   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:10.447757   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:10.447822   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:10.480865   72122 cri.go:89] found id: ""
	I0910 19:03:10.480892   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.480902   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:10.480909   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:10.480966   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:10.514893   72122 cri.go:89] found id: ""
	I0910 19:03:10.514919   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.514927   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:10.514933   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:10.514994   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:10.556332   72122 cri.go:89] found id: ""
	I0910 19:03:10.556374   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.556385   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:10.556392   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:10.556457   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:10.590529   72122 cri.go:89] found id: ""
	I0910 19:03:10.590562   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.590573   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:10.590581   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:10.590642   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:10.623697   72122 cri.go:89] found id: ""
	I0910 19:03:10.623724   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.623732   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:10.623737   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:10.623788   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:10.659236   72122 cri.go:89] found id: ""
	I0910 19:03:10.659259   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.659270   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:10.659277   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:10.659338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:10.693150   72122 cri.go:89] found id: ""
	I0910 19:03:10.693182   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.693192   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:10.693202   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:10.693217   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:10.744624   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:10.744663   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:10.758797   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:10.758822   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:10.853796   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:10.853815   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:10.853827   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:10.937972   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:10.938019   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:11.724808   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:14.224052   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:12.535134   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:15.033867   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:17.034507   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:13.667548   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:16.164483   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:13.481898   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:13.495440   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:13.495505   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:13.531423   72122 cri.go:89] found id: ""
	I0910 19:03:13.531452   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.531463   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:13.531470   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:13.531532   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:13.571584   72122 cri.go:89] found id: ""
	I0910 19:03:13.571607   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.571615   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:13.571620   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:13.571674   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:13.609670   72122 cri.go:89] found id: ""
	I0910 19:03:13.609695   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.609702   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:13.609707   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:13.609761   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:13.644726   72122 cri.go:89] found id: ""
	I0910 19:03:13.644755   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.644766   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:13.644773   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:13.644831   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:13.679692   72122 cri.go:89] found id: ""
	I0910 19:03:13.679722   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.679733   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:13.679741   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:13.679791   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:13.717148   72122 cri.go:89] found id: ""
	I0910 19:03:13.717177   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.717186   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:13.717192   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:13.717247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:13.755650   72122 cri.go:89] found id: ""
	I0910 19:03:13.755676   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.755688   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:13.755693   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:13.755740   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:13.788129   72122 cri.go:89] found id: ""
	I0910 19:03:13.788158   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.788169   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:13.788179   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:13.788194   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:13.865241   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:13.865277   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:13.909205   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:13.909233   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:13.963495   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:13.963523   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:13.977311   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:13.977337   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:14.047015   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:16.547505   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:16.568333   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:16.568412   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:16.610705   72122 cri.go:89] found id: ""
	I0910 19:03:16.610734   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.610744   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:16.610752   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:16.610808   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:16.647307   72122 cri.go:89] found id: ""
	I0910 19:03:16.647333   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.647340   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:16.647345   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:16.647409   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:16.684513   72122 cri.go:89] found id: ""
	I0910 19:03:16.684536   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.684544   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:16.684549   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:16.684602   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:16.718691   72122 cri.go:89] found id: ""
	I0910 19:03:16.718719   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.718729   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:16.718734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:16.718794   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:16.753250   72122 cri.go:89] found id: ""
	I0910 19:03:16.753279   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.753291   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:16.753298   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:16.753358   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:16.788953   72122 cri.go:89] found id: ""
	I0910 19:03:16.788984   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.789001   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:16.789009   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:16.789084   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:16.823715   72122 cri.go:89] found id: ""
	I0910 19:03:16.823746   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.823760   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:16.823767   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:16.823837   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:16.858734   72122 cri.go:89] found id: ""
	I0910 19:03:16.858758   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.858770   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:16.858780   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:16.858795   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:16.897983   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:16.898012   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:16.950981   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:16.951015   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:16.964809   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:16.964839   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:17.039142   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:17.039163   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:17.039177   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:16.724218   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:19.223909   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:19.533783   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:21.534203   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:18.164708   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:20.664302   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:19.619941   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:19.634432   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:19.634489   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:19.671220   72122 cri.go:89] found id: ""
	I0910 19:03:19.671246   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.671256   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:19.671264   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:19.671322   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:19.704251   72122 cri.go:89] found id: ""
	I0910 19:03:19.704278   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.704294   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:19.704301   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:19.704347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:19.745366   72122 cri.go:89] found id: ""
	I0910 19:03:19.745393   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.745403   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:19.745410   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:19.745466   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:19.781100   72122 cri.go:89] found id: ""
	I0910 19:03:19.781129   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.781136   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:19.781141   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:19.781195   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:19.817177   72122 cri.go:89] found id: ""
	I0910 19:03:19.817207   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.817219   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:19.817226   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:19.817292   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:19.852798   72122 cri.go:89] found id: ""
	I0910 19:03:19.852829   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.852837   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:19.852842   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:19.852889   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:19.887173   72122 cri.go:89] found id: ""
	I0910 19:03:19.887200   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.887210   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:19.887219   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:19.887409   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:19.922997   72122 cri.go:89] found id: ""
	I0910 19:03:19.923026   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.923038   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:19.923049   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:19.923063   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:19.975703   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:19.975736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:19.989834   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:19.989866   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:20.061312   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:20.061332   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:20.061344   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:20.143045   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:20.143080   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:21.723250   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:23.723771   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:25.724346   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:24.036790   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:26.533830   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:22.664756   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:25.164650   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:22.681900   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:22.694860   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:22.694923   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:22.738529   72122 cri.go:89] found id: ""
	I0910 19:03:22.738553   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.738563   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:22.738570   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:22.738640   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:22.778102   72122 cri.go:89] found id: ""
	I0910 19:03:22.778132   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.778143   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:22.778150   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:22.778207   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:22.813273   72122 cri.go:89] found id: ""
	I0910 19:03:22.813307   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.813320   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:22.813334   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:22.813397   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:22.849613   72122 cri.go:89] found id: ""
	I0910 19:03:22.849637   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.849646   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:22.849651   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:22.849701   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:22.883138   72122 cri.go:89] found id: ""
	I0910 19:03:22.883167   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.883178   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:22.883185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:22.883237   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:22.918521   72122 cri.go:89] found id: ""
	I0910 19:03:22.918550   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.918567   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:22.918574   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:22.918632   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:22.966657   72122 cri.go:89] found id: ""
	I0910 19:03:22.966684   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.966691   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:22.966701   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:22.966762   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:23.022254   72122 cri.go:89] found id: ""
	I0910 19:03:23.022282   72122 logs.go:276] 0 containers: []
	W0910 19:03:23.022290   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:23.022298   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:23.022309   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:23.082347   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:23.082386   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:23.096792   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:23.096814   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:23.172720   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:23.172740   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:23.172754   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:23.256155   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:23.256193   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:25.797211   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:25.810175   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:25.810234   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:25.844848   72122 cri.go:89] found id: ""
	I0910 19:03:25.844876   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.844886   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:25.844901   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:25.844968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:25.877705   72122 cri.go:89] found id: ""
	I0910 19:03:25.877736   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.877747   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:25.877755   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:25.877807   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:25.913210   72122 cri.go:89] found id: ""
	I0910 19:03:25.913238   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.913248   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:25.913256   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:25.913316   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:25.947949   72122 cri.go:89] found id: ""
	I0910 19:03:25.947974   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.947984   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:25.947991   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:25.948050   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:25.983487   72122 cri.go:89] found id: ""
	I0910 19:03:25.983511   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.983519   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:25.983524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:25.983573   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:26.018176   72122 cri.go:89] found id: ""
	I0910 19:03:26.018201   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.018209   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:26.018214   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:26.018271   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:26.052063   72122 cri.go:89] found id: ""
	I0910 19:03:26.052087   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.052097   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:26.052104   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:26.052165   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:26.091919   72122 cri.go:89] found id: ""
	I0910 19:03:26.091949   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.091958   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:26.091968   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:26.091983   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:26.146059   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:26.146094   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:26.160529   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:26.160562   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:26.230742   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:26.230764   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:26.230778   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:26.313191   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:26.313222   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:27.724922   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:30.223811   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:29.039957   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:31.533256   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:27.665626   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:29.666857   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:32.165153   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:28.858457   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:28.873725   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:28.873788   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:28.922685   72122 cri.go:89] found id: ""
	I0910 19:03:28.922717   72122 logs.go:276] 0 containers: []
	W0910 19:03:28.922729   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:28.922737   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:28.922795   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:28.973236   72122 cri.go:89] found id: ""
	I0910 19:03:28.973260   72122 logs.go:276] 0 containers: []
	W0910 19:03:28.973270   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:28.973277   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:28.973339   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:29.008999   72122 cri.go:89] found id: ""
	I0910 19:03:29.009049   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.009062   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:29.009081   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:29.009148   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:29.049009   72122 cri.go:89] found id: ""
	I0910 19:03:29.049037   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.049047   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:29.049056   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:29.049131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:29.089543   72122 cri.go:89] found id: ""
	I0910 19:03:29.089564   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.089573   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:29.089578   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:29.089648   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:29.126887   72122 cri.go:89] found id: ""
	I0910 19:03:29.126911   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.126918   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:29.126924   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:29.126969   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:29.161369   72122 cri.go:89] found id: ""
	I0910 19:03:29.161395   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.161405   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:29.161412   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:29.161474   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:29.199627   72122 cri.go:89] found id: ""
	I0910 19:03:29.199652   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.199661   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:29.199672   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:29.199691   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:29.268353   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:29.268386   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:29.268401   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:29.351470   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:29.351504   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:29.391768   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:29.391796   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:29.442705   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:29.442736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:31.957567   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:31.970218   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:31.970274   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:32.004870   72122 cri.go:89] found id: ""
	I0910 19:03:32.004898   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.004908   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:32.004915   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:32.004971   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:32.045291   72122 cri.go:89] found id: ""
	I0910 19:03:32.045322   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.045331   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:32.045337   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:32.045403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:32.085969   72122 cri.go:89] found id: ""
	I0910 19:03:32.085999   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.086007   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:32.086013   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:32.086067   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:32.120100   72122 cri.go:89] found id: ""
	I0910 19:03:32.120127   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.120135   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:32.120141   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:32.120187   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:32.153977   72122 cri.go:89] found id: ""
	I0910 19:03:32.154004   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.154011   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:32.154016   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:32.154065   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:32.195980   72122 cri.go:89] found id: ""
	I0910 19:03:32.196005   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.196013   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:32.196019   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:32.196068   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:32.233594   72122 cri.go:89] found id: ""
	I0910 19:03:32.233616   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.233623   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:32.233632   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:32.233677   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:32.268118   72122 cri.go:89] found id: ""
	I0910 19:03:32.268144   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.268152   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:32.268160   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:32.268171   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:32.281389   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:32.281416   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:32.359267   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:32.359289   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:32.359304   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:32.445096   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:32.445137   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:32.483288   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:32.483325   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:32.224155   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:34.724191   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:33.537955   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:36.033801   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:34.663475   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:36.665627   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:35.040393   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:35.053698   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:35.053750   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:35.087712   72122 cri.go:89] found id: ""
	I0910 19:03:35.087742   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.087751   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:35.087757   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:35.087802   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:35.125437   72122 cri.go:89] found id: ""
	I0910 19:03:35.125468   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.125482   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:35.125495   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:35.125562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:35.163885   72122 cri.go:89] found id: ""
	I0910 19:03:35.163914   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.163924   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:35.163931   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:35.163989   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:35.199426   72122 cri.go:89] found id: ""
	I0910 19:03:35.199459   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.199471   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:35.199479   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:35.199559   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:35.236388   72122 cri.go:89] found id: ""
	I0910 19:03:35.236408   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.236416   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:35.236421   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:35.236465   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:35.274797   72122 cri.go:89] found id: ""
	I0910 19:03:35.274817   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.274825   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:35.274830   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:35.274874   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:35.308127   72122 cri.go:89] found id: ""
	I0910 19:03:35.308155   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.308166   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:35.308173   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:35.308230   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:35.340675   72122 cri.go:89] found id: ""
	I0910 19:03:35.340697   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.340704   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:35.340712   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:35.340727   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:35.390806   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:35.390842   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:35.404427   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:35.404458   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:35.471526   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:35.471560   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:35.471575   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:35.547469   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:35.547497   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:37.223464   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:39.224137   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:41.224189   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:38.534280   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:40.534728   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:38.666077   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:41.165483   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:38.087127   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:38.100195   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:38.100251   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:38.135386   72122 cri.go:89] found id: ""
	I0910 19:03:38.135408   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.135416   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:38.135422   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:38.135480   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:38.168531   72122 cri.go:89] found id: ""
	I0910 19:03:38.168558   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.168568   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:38.168577   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:38.168639   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:38.202931   72122 cri.go:89] found id: ""
	I0910 19:03:38.202958   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.202968   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:38.202974   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:38.203030   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:38.239185   72122 cri.go:89] found id: ""
	I0910 19:03:38.239209   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.239219   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:38.239226   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:38.239279   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:38.276927   72122 cri.go:89] found id: ""
	I0910 19:03:38.276952   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.276961   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:38.276967   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:38.277035   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:38.311923   72122 cri.go:89] found id: ""
	I0910 19:03:38.311951   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.311962   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:38.311971   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:38.312034   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:38.344981   72122 cri.go:89] found id: ""
	I0910 19:03:38.345012   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.345023   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:38.345030   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:38.345099   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:38.378012   72122 cri.go:89] found id: ""
	I0910 19:03:38.378037   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.378048   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:38.378058   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:38.378076   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:38.449361   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:38.449384   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:38.449396   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:38.530683   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:38.530713   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:38.570047   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:38.570073   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:38.620143   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:38.620176   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:41.134152   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:41.148416   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:41.148509   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:41.186681   72122 cri.go:89] found id: ""
	I0910 19:03:41.186706   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.186713   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:41.186719   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:41.186767   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:41.221733   72122 cri.go:89] found id: ""
	I0910 19:03:41.221758   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.221769   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:41.221776   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:41.221834   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:41.256099   72122 cri.go:89] found id: ""
	I0910 19:03:41.256125   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.256136   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:41.256143   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:41.256194   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:41.289825   72122 cri.go:89] found id: ""
	I0910 19:03:41.289850   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.289860   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:41.289867   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:41.289926   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:41.323551   72122 cri.go:89] found id: ""
	I0910 19:03:41.323581   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.323594   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:41.323601   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:41.323659   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:41.356508   72122 cri.go:89] found id: ""
	I0910 19:03:41.356535   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.356546   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:41.356553   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:41.356608   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:41.391556   72122 cri.go:89] found id: ""
	I0910 19:03:41.391579   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.391586   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:41.391592   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:41.391651   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:41.427685   72122 cri.go:89] found id: ""
	I0910 19:03:41.427711   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.427726   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:41.427743   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:41.427758   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:41.481970   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:41.482001   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:41.495266   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:41.495290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:41.568334   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:41.568357   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:41.568370   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:41.650178   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:41.650211   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:43.724494   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:46.223803   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:43.034100   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:45.035091   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:43.167877   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:45.664633   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:44.193665   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:44.209118   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:44.209197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:44.245792   72122 cri.go:89] found id: ""
	I0910 19:03:44.245819   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.245829   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:44.245834   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:44.245900   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:44.285673   72122 cri.go:89] found id: ""
	I0910 19:03:44.285699   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.285711   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:44.285719   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:44.285787   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:44.326471   72122 cri.go:89] found id: ""
	I0910 19:03:44.326495   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.326505   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:44.326520   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:44.326589   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:44.367864   72122 cri.go:89] found id: ""
	I0910 19:03:44.367890   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.367898   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:44.367907   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:44.367954   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:44.407161   72122 cri.go:89] found id: ""
	I0910 19:03:44.407185   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.407193   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:44.407198   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:44.407256   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:44.446603   72122 cri.go:89] found id: ""
	I0910 19:03:44.446628   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.446638   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:44.446645   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:44.446705   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:44.486502   72122 cri.go:89] found id: ""
	I0910 19:03:44.486526   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.486536   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:44.486543   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:44.486605   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:44.524992   72122 cri.go:89] found id: ""
	I0910 19:03:44.525017   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.525025   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:44.525033   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:44.525044   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:44.579387   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:44.579418   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:44.594045   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:44.594070   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:44.678857   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:44.678883   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:44.678897   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:44.763799   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:44.763830   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:47.305631   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:47.319275   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:47.319347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:47.359199   72122 cri.go:89] found id: ""
	I0910 19:03:47.359222   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.359233   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:47.359240   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:47.359300   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:47.397579   72122 cri.go:89] found id: ""
	I0910 19:03:47.397602   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.397610   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:47.397616   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:47.397674   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:47.431114   72122 cri.go:89] found id: ""
	I0910 19:03:47.431138   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.431146   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:47.431151   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:47.431205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:47.470475   72122 cri.go:89] found id: ""
	I0910 19:03:47.470499   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.470509   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:47.470515   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:47.470573   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:48.227625   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:50.725421   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:47.534967   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:49.027864   71529 pod_ready.go:82] duration metric: took 4m0.000448579s for pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace to be "Ready" ...
	E0910 19:03:49.027890   71529 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0910 19:03:49.027905   71529 pod_ready.go:39] duration metric: took 4m14.536052937s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:03:49.027929   71529 kubeadm.go:597] duration metric: took 4m22.283340761s to restartPrimaryControlPlane
	W0910 19:03:49.027982   71529 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0910 19:03:49.028009   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:03:47.668029   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:50.164077   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:47.504484   72122 cri.go:89] found id: ""
	I0910 19:03:47.504509   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.504518   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:47.504524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:47.504577   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:47.541633   72122 cri.go:89] found id: ""
	I0910 19:03:47.541651   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.541658   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:47.541663   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:47.541706   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:47.579025   72122 cri.go:89] found id: ""
	I0910 19:03:47.579051   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.579060   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:47.579068   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:47.579123   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:47.612333   72122 cri.go:89] found id: ""
	I0910 19:03:47.612359   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.612370   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:47.612380   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:47.612395   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:47.667214   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:47.667242   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:47.683425   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:47.683466   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:47.749510   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:47.749531   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:47.749543   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:47.830454   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:47.830487   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:50.373207   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:50.387191   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:50.387247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:50.422445   72122 cri.go:89] found id: ""
	I0910 19:03:50.422476   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.422488   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:50.422495   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:50.422562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:50.456123   72122 cri.go:89] found id: ""
	I0910 19:03:50.456145   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.456153   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:50.456157   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:50.456211   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:50.488632   72122 cri.go:89] found id: ""
	I0910 19:03:50.488661   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.488672   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:50.488680   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:50.488736   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:50.523603   72122 cri.go:89] found id: ""
	I0910 19:03:50.523628   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.523636   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:50.523641   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:50.523699   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:50.559741   72122 cri.go:89] found id: ""
	I0910 19:03:50.559765   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.559773   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:50.559778   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:50.559842   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:50.595387   72122 cri.go:89] found id: ""
	I0910 19:03:50.595406   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.595414   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:50.595420   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:50.595472   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:50.628720   72122 cri.go:89] found id: ""
	I0910 19:03:50.628747   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.628767   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:50.628774   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:50.628833   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:50.660635   72122 cri.go:89] found id: ""
	I0910 19:03:50.660655   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.660663   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:50.660671   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:50.660683   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:50.716517   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:50.716544   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:50.731411   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:50.731443   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:50.799252   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:50.799275   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:50.799290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:50.874490   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:50.874524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:53.222989   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:55.225335   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:55.225365   71627 pod_ready.go:82] duration metric: took 4m0.007907353s for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	E0910 19:03:55.225523   71627 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0910 19:03:55.225534   71627 pod_ready.go:39] duration metric: took 4m2.40870138s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:03:55.225551   71627 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:03:55.225579   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:55.225629   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:55.270742   71627 cri.go:89] found id: "1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:55.270761   71627 cri.go:89] found id: ""
	I0910 19:03:55.270768   71627 logs.go:276] 1 containers: [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a]
	I0910 19:03:55.270811   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.276233   71627 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:55.276283   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:55.316033   71627 cri.go:89] found id: "f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:55.316051   71627 cri.go:89] found id: ""
	I0910 19:03:55.316058   71627 logs.go:276] 1 containers: [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade]
	I0910 19:03:55.316103   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.320441   71627 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:55.320494   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:55.354406   71627 cri.go:89] found id: "24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:55.354428   71627 cri.go:89] found id: ""
	I0910 19:03:55.354435   71627 logs.go:276] 1 containers: [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100]
	I0910 19:03:55.354482   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.358553   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:55.358621   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:55.393871   71627 cri.go:89] found id: "1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:55.393896   71627 cri.go:89] found id: ""
	I0910 19:03:55.393904   71627 logs.go:276] 1 containers: [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68]
	I0910 19:03:55.393959   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.398102   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:55.398154   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:55.432605   71627 cri.go:89] found id: "48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:55.432625   71627 cri.go:89] found id: ""
	I0910 19:03:55.432632   71627 logs.go:276] 1 containers: [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27]
	I0910 19:03:55.432686   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.437631   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:55.437689   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:55.474250   71627 cri.go:89] found id: "55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:55.474277   71627 cri.go:89] found id: ""
	I0910 19:03:55.474287   71627 logs.go:276] 1 containers: [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20]
	I0910 19:03:55.474352   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.479177   71627 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:55.479235   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:55.514918   71627 cri.go:89] found id: ""
	I0910 19:03:55.514942   71627 logs.go:276] 0 containers: []
	W0910 19:03:55.514951   71627 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:55.514956   71627 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:03:55.515014   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:03:55.549310   71627 cri.go:89] found id: "b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:55.549330   71627 cri.go:89] found id: "173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:55.549335   71627 cri.go:89] found id: ""
	I0910 19:03:55.549347   71627 logs.go:276] 2 containers: [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9]
	I0910 19:03:55.549404   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.553420   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.557502   71627 logs.go:123] Gathering logs for kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] ...
	I0910 19:03:55.557531   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:55.592661   71627 logs.go:123] Gathering logs for kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] ...
	I0910 19:03:55.592685   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:55.629876   71627 logs.go:123] Gathering logs for storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] ...
	I0910 19:03:55.629908   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:55.668935   71627 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:55.668963   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:55.685881   71627 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:55.685906   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:03:55.815552   71627 logs.go:123] Gathering logs for coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] ...
	I0910 19:03:55.815578   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:55.854615   71627 logs.go:123] Gathering logs for kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] ...
	I0910 19:03:55.854640   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:55.906027   71627 logs.go:123] Gathering logs for storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] ...
	I0910 19:03:55.906069   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:55.943771   71627 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:55.943808   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:52.666368   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:55.165213   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:53.417835   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:53.430627   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:53.430694   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:53.469953   72122 cri.go:89] found id: ""
	I0910 19:03:53.469981   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.469992   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:53.469999   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:53.470060   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:53.503712   72122 cri.go:89] found id: ""
	I0910 19:03:53.503739   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.503750   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:53.503757   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:53.503814   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:53.539875   72122 cri.go:89] found id: ""
	I0910 19:03:53.539895   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.539902   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:53.539907   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:53.539952   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:53.575040   72122 cri.go:89] found id: ""
	I0910 19:03:53.575067   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.575078   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:53.575085   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:53.575159   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:53.611171   72122 cri.go:89] found id: ""
	I0910 19:03:53.611193   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.611201   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:53.611206   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:53.611253   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:53.644467   72122 cri.go:89] found id: ""
	I0910 19:03:53.644494   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.644505   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:53.644513   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:53.644575   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:53.680886   72122 cri.go:89] found id: ""
	I0910 19:03:53.680913   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.680924   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:53.680931   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:53.680989   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:53.716834   72122 cri.go:89] found id: ""
	I0910 19:03:53.716863   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.716875   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:53.716885   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:53.716900   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:53.755544   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:53.755568   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:53.807382   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:53.807411   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:53.820289   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:53.820311   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:53.891500   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:53.891524   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:53.891540   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:56.472368   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:56.491939   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:56.492020   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:56.535575   72122 cri.go:89] found id: ""
	I0910 19:03:56.535603   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.535614   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:56.535620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:56.535672   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:56.570366   72122 cri.go:89] found id: ""
	I0910 19:03:56.570390   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.570398   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:56.570403   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:56.570452   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:56.609486   72122 cri.go:89] found id: ""
	I0910 19:03:56.609524   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.609535   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:56.609542   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:56.609608   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:56.650268   72122 cri.go:89] found id: ""
	I0910 19:03:56.650295   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.650305   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:56.650312   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:56.650371   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:56.689113   72122 cri.go:89] found id: ""
	I0910 19:03:56.689139   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.689146   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:56.689154   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:56.689214   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:56.721546   72122 cri.go:89] found id: ""
	I0910 19:03:56.721568   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.721576   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:56.721582   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:56.721639   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:56.753149   72122 cri.go:89] found id: ""
	I0910 19:03:56.753171   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.753179   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:56.753185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:56.753233   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:56.786624   72122 cri.go:89] found id: ""
	I0910 19:03:56.786648   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.786658   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:56.786669   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:56.786683   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:56.840243   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:56.840276   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:56.854453   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:56.854475   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:56.928814   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:56.928835   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:56.928849   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:57.012360   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:57.012403   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:56.443638   71627 logs.go:123] Gathering logs for container status ...
	I0910 19:03:56.443684   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:56.498856   71627 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:56.498897   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:56.573520   71627 logs.go:123] Gathering logs for kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] ...
	I0910 19:03:56.573548   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:56.621270   71627 logs.go:123] Gathering logs for etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] ...
	I0910 19:03:56.621301   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:59.173747   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:59.190441   71627 api_server.go:72] duration metric: took 4m14.110101643s to wait for apiserver process to appear ...
	I0910 19:03:59.190463   71627 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:03:59.190495   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:59.190539   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:59.224716   71627 cri.go:89] found id: "1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:59.224744   71627 cri.go:89] found id: ""
	I0910 19:03:59.224753   71627 logs.go:276] 1 containers: [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a]
	I0910 19:03:59.224811   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.229345   71627 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:59.229412   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:59.263589   71627 cri.go:89] found id: "f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:59.263622   71627 cri.go:89] found id: ""
	I0910 19:03:59.263630   71627 logs.go:276] 1 containers: [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade]
	I0910 19:03:59.263686   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.269664   71627 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:59.269728   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:59.312201   71627 cri.go:89] found id: "24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:59.312224   71627 cri.go:89] found id: ""
	I0910 19:03:59.312233   71627 logs.go:276] 1 containers: [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100]
	I0910 19:03:59.312288   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.317991   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:59.318067   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:59.360625   71627 cri.go:89] found id: "1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:59.360650   71627 cri.go:89] found id: ""
	I0910 19:03:59.360657   71627 logs.go:276] 1 containers: [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68]
	I0910 19:03:59.360707   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.364948   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:59.365010   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:59.404075   71627 cri.go:89] found id: "48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:59.404096   71627 cri.go:89] found id: ""
	I0910 19:03:59.404103   71627 logs.go:276] 1 containers: [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27]
	I0910 19:03:59.404149   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.408098   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:59.408141   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:59.443767   71627 cri.go:89] found id: "55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:59.443792   71627 cri.go:89] found id: ""
	I0910 19:03:59.443802   71627 logs.go:276] 1 containers: [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20]
	I0910 19:03:59.443858   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.448348   71627 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:59.448397   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:59.485373   71627 cri.go:89] found id: ""
	I0910 19:03:59.485401   71627 logs.go:276] 0 containers: []
	W0910 19:03:59.485409   71627 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:59.485414   71627 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:03:59.485470   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:03:59.522641   71627 cri.go:89] found id: "b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:59.522660   71627 cri.go:89] found id: "173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:59.522664   71627 cri.go:89] found id: ""
	I0910 19:03:59.522671   71627 logs.go:276] 2 containers: [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9]
	I0910 19:03:59.522726   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.527283   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.531256   71627 logs.go:123] Gathering logs for etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] ...
	I0910 19:03:59.531275   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:59.576358   71627 logs.go:123] Gathering logs for coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] ...
	I0910 19:03:59.576382   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:59.625938   71627 logs.go:123] Gathering logs for kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] ...
	I0910 19:03:59.625974   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:59.664362   71627 logs.go:123] Gathering logs for kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] ...
	I0910 19:03:59.664386   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:59.718655   71627 logs.go:123] Gathering logs for storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] ...
	I0910 19:03:59.718686   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:59.763954   71627 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:59.763984   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:59.785217   71627 logs.go:123] Gathering logs for kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] ...
	I0910 19:03:59.785248   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:59.836560   71627 logs.go:123] Gathering logs for kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] ...
	I0910 19:03:59.836604   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:59.878973   71627 logs.go:123] Gathering logs for storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] ...
	I0910 19:03:59.879001   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:59.929851   71627 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:59.929878   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:00.400346   71627 logs.go:123] Gathering logs for container status ...
	I0910 19:04:00.400384   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:00.442281   71627 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:00.442307   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:00.510448   71627 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:00.510480   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:03:57.665980   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:59.666054   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:01.668052   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:59.558561   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:59.572993   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:59.573094   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:59.618957   72122 cri.go:89] found id: ""
	I0910 19:03:59.618988   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.618999   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:59.619008   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:59.619072   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:59.662544   72122 cri.go:89] found id: ""
	I0910 19:03:59.662643   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.662661   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:59.662673   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:59.662750   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:59.704323   72122 cri.go:89] found id: ""
	I0910 19:03:59.704349   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.704360   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:59.704367   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:59.704426   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:59.738275   72122 cri.go:89] found id: ""
	I0910 19:03:59.738301   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.738311   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:59.738317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:59.738367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:59.778887   72122 cri.go:89] found id: ""
	I0910 19:03:59.778922   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.778934   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:59.778944   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:59.779010   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:59.814953   72122 cri.go:89] found id: ""
	I0910 19:03:59.814985   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.814995   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:59.815003   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:59.815064   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:59.850016   72122 cri.go:89] found id: ""
	I0910 19:03:59.850048   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.850061   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:59.850069   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:59.850131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:59.887546   72122 cri.go:89] found id: ""
	I0910 19:03:59.887589   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.887600   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:59.887613   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:59.887632   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:59.938761   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:59.938784   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:59.954572   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:59.954603   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:04:00.029593   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:04:00.029622   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:00.029638   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:00.121427   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:04:00.121462   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:02.660924   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:02.674661   72122 kubeadm.go:597] duration metric: took 4m3.166175956s to restartPrimaryControlPlane
	W0910 19:04:02.674744   72122 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0910 19:04:02.674769   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:04:03.133507   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:03.150426   72122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:04:03.161678   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:04:03.173362   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:04:03.173389   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 19:04:03.173436   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:04:03.183872   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:04:03.183934   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:04:03.193891   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:04:03.203385   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:04:03.203450   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:04:03.216255   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:04:03.227938   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:04:03.228001   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:04:03.240799   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:04:03.252871   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:04:03.252922   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
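The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. A compact equivalent of that loop, using the same paths and endpoint shown in the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # in this run every file is already missing, so each rm is a no-op
      fi
    done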
	I0910 19:04:03.263682   72122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:04:03.337478   72122 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0910 19:04:03.337564   72122 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:04:03.506276   72122 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:04:03.506454   72122 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:04:03.506587   72122 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 19:04:03.697062   72122 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:04:03.698908   72122 out.go:235]   - Generating certificates and keys ...
	I0910 19:04:03.699004   72122 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:04:03.699083   72122 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:04:03.699184   72122 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:04:03.699270   72122 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:04:03.699361   72122 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:04:03.699517   72122 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:04:03.700040   72122 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:04:03.700773   72122 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:04:03.701529   72122 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:04:03.702334   72122 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:04:03.702627   72122 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:04:03.702715   72122 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:04:03.929760   72122 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:04:03.992724   72122 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:04:04.087552   72122 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:04:04.226550   72122 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:04:04.244695   72122 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:04:04.246125   72122 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:04:04.246187   72122 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:04:04.396099   72122 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:04:03.107779   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 19:04:03.112394   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I0910 19:04:03.113347   71627 api_server.go:141] control plane version: v1.31.0
	I0910 19:04:03.113367   71627 api_server.go:131] duration metric: took 3.922898577s to wait for apiserver health ...
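The healthz probe above is the readiness gate for this profile's apiserver, which listens on 8444 rather than the default 8443 (hence the default-k8s-diff-port name). A manual equivalent from the test host, using the VM address shown in the log:

    curl -k https://192.168.72.54:8444/healthz    # expect the body: ok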
	I0910 19:04:03.113375   71627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:04:03.113399   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:03.113443   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:03.153182   71627 cri.go:89] found id: "1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:04:03.153204   71627 cri.go:89] found id: ""
	I0910 19:04:03.153213   71627 logs.go:276] 1 containers: [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a]
	I0910 19:04:03.153263   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.157842   71627 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:03.157906   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:03.199572   71627 cri.go:89] found id: "f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:04:03.199594   71627 cri.go:89] found id: ""
	I0910 19:04:03.199604   71627 logs.go:276] 1 containers: [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade]
	I0910 19:04:03.199658   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.204332   71627 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:03.204409   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:03.252660   71627 cri.go:89] found id: "24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:04:03.252686   71627 cri.go:89] found id: ""
	I0910 19:04:03.252696   71627 logs.go:276] 1 containers: [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100]
	I0910 19:04:03.252751   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.257850   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:03.257915   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:03.300208   71627 cri.go:89] found id: "1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:04:03.300226   71627 cri.go:89] found id: ""
	I0910 19:04:03.300235   71627 logs.go:276] 1 containers: [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68]
	I0910 19:04:03.300294   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.304875   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:03.304953   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:03.346705   71627 cri.go:89] found id: "48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:04:03.346734   71627 cri.go:89] found id: ""
	I0910 19:04:03.346744   71627 logs.go:276] 1 containers: [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27]
	I0910 19:04:03.346807   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.351246   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:03.351314   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:03.391218   71627 cri.go:89] found id: "55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:04:03.391240   71627 cri.go:89] found id: ""
	I0910 19:04:03.391247   71627 logs.go:276] 1 containers: [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20]
	I0910 19:04:03.391290   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.396156   71627 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:03.396264   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:03.437436   71627 cri.go:89] found id: ""
	I0910 19:04:03.437464   71627 logs.go:276] 0 containers: []
	W0910 19:04:03.437473   71627 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:03.437479   71627 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:03.437551   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:03.476396   71627 cri.go:89] found id: "b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:04:03.476417   71627 cri.go:89] found id: "173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:04:03.476420   71627 cri.go:89] found id: ""
	I0910 19:04:03.476427   71627 logs.go:276] 2 containers: [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9]
	I0910 19:04:03.476481   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.480969   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.485821   71627 logs.go:123] Gathering logs for container status ...
	I0910 19:04:03.485843   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:03.537042   71627 logs.go:123] Gathering logs for kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] ...
	I0910 19:04:03.537079   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:04:03.599059   71627 logs.go:123] Gathering logs for coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] ...
	I0910 19:04:03.599102   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:04:03.637541   71627 logs.go:123] Gathering logs for kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] ...
	I0910 19:04:03.637576   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:04:03.682203   71627 logs.go:123] Gathering logs for kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] ...
	I0910 19:04:03.682234   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:04:03.734965   71627 logs.go:123] Gathering logs for storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] ...
	I0910 19:04:03.734992   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:04:03.769711   71627 logs.go:123] Gathering logs for storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] ...
	I0910 19:04:03.769738   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:04:03.805970   71627 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:03.805999   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:04.165756   71627 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:04.165796   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:04.254572   71627 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:04.254609   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:04.272637   71627 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:04.272686   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:04.421716   71627 logs.go:123] Gathering logs for etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] ...
	I0910 19:04:04.421756   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:04:04.476657   71627 logs.go:123] Gathering logs for kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] ...
	I0910 19:04:04.476701   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:04:07.038592   71627 system_pods.go:59] 8 kube-system pods found
	I0910 19:04:07.038618   71627 system_pods.go:61] "coredns-6f6b679f8f-nq9fl" [87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f] Running
	I0910 19:04:07.038624   71627 system_pods.go:61] "etcd-default-k8s-diff-port-557504" [484ce7e2-e5b6-4b96-928e-c4519e072425] Running
	I0910 19:04:07.038628   71627 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-557504" [55a71e48-2cca-484c-8186-176494f8158c] Running
	I0910 19:04:07.038632   71627 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-557504" [98e7866a-c4a1-4b90-a758-506502405feb] Running
	I0910 19:04:07.038636   71627 system_pods.go:61] "kube-proxy-4t8r9" [aca739fc-0169-433b-85f1-17bf3ab538cb] Running
	I0910 19:04:07.038639   71627 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-557504" [eaa24622-2f9c-47bd-b002-173d5fb06126] Running
	I0910 19:04:07.038644   71627 system_pods.go:61] "metrics-server-6867b74b74-4sfwg" [6b5d0161-6a62-4752-b714-ada6b3772956] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:07.038651   71627 system_pods.go:61] "storage-provisioner" [7536d42b-90f4-44de-a7ba-652f8e535304] Running
	I0910 19:04:07.038658   71627 system_pods.go:74] duration metric: took 3.925277367s to wait for pod list to return data ...
	I0910 19:04:07.038667   71627 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:04:07.040831   71627 default_sa.go:45] found service account: "default"
	I0910 19:04:07.040854   71627 default_sa.go:55] duration metric: took 2.180848ms for default service account to be created ...
	I0910 19:04:07.040864   71627 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:04:07.045130   71627 system_pods.go:86] 8 kube-system pods found
	I0910 19:04:07.045151   71627 system_pods.go:89] "coredns-6f6b679f8f-nq9fl" [87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f] Running
	I0910 19:04:07.045157   71627 system_pods.go:89] "etcd-default-k8s-diff-port-557504" [484ce7e2-e5b6-4b96-928e-c4519e072425] Running
	I0910 19:04:07.045162   71627 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-557504" [55a71e48-2cca-484c-8186-176494f8158c] Running
	I0910 19:04:07.045167   71627 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-557504" [98e7866a-c4a1-4b90-a758-506502405feb] Running
	I0910 19:04:07.045171   71627 system_pods.go:89] "kube-proxy-4t8r9" [aca739fc-0169-433b-85f1-17bf3ab538cb] Running
	I0910 19:04:07.045175   71627 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-557504" [eaa24622-2f9c-47bd-b002-173d5fb06126] Running
	I0910 19:04:07.045180   71627 system_pods.go:89] "metrics-server-6867b74b74-4sfwg" [6b5d0161-6a62-4752-b714-ada6b3772956] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:07.045184   71627 system_pods.go:89] "storage-provisioner" [7536d42b-90f4-44de-a7ba-652f8e535304] Running
	I0910 19:04:07.045191   71627 system_pods.go:126] duration metric: took 4.321406ms to wait for k8s-apps to be running ...
	I0910 19:04:07.045200   71627 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:04:07.045242   71627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:07.061292   71627 system_svc.go:56] duration metric: took 16.084643ms WaitForService to wait for kubelet
	I0910 19:04:07.061318   71627 kubeadm.go:582] duration metric: took 4m21.980981405s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:04:07.061342   71627 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:04:07.064260   71627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:04:07.064277   71627 node_conditions.go:123] node cpu capacity is 2
	I0910 19:04:07.064288   71627 node_conditions.go:105] duration metric: took 2.940712ms to run NodePressure ...
	I0910 19:04:07.064298   71627 start.go:241] waiting for startup goroutines ...
	I0910 19:04:07.064308   71627 start.go:246] waiting for cluster config update ...
	I0910 19:04:07.064318   71627 start.go:255] writing updated cluster config ...
	I0910 19:04:07.064627   71627 ssh_runner.go:195] Run: rm -f paused
	I0910 19:04:07.109814   71627 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:04:07.111804   71627 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-557504" cluster and "default" namespace by default
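The Done! line above closes minikube's readiness ladder for this profile: apiserver healthz, kube-system pod listing, default ServiceAccount, kubelet service, and NodePressure. Assuming the kubectl context carries the profile name as minikube normally sets it, the same state can be confirmed from the host:

    kubectl --context default-k8s-diff-port-557504 get pods -n kube-system
    kubectl --context default-k8s-diff-port-557504 get sa default
    kubectl --context default-k8s-diff-port-557504 get nodes

Note that metrics-server-6867b74b74-4sfwg is still Pending / not Ready in the pod listings above, so the cluster is declared Done with that addon pod not yet serving.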
	I0910 19:04:04.165083   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:06.663618   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:04.397627   72122 out.go:235]   - Booting up control plane ...
	I0910 19:04:04.397763   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:04:04.405199   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:04:04.407281   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:04:04.408182   72122 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:04:04.411438   72122 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
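At this point the v1.20.0 kubeadm init (process 72122, the old-k8s-version profile) is waiting for the kubelet to start the static pods it just wrote. If that wait looks stuck, the usual places to check on the node are the manifest directory, the container list and the kubelet journal (commands of the same form already used elsewhere in this log):

    sudo ls /etc/kubernetes/manifests/         # kube-apiserver / controller-manager / scheduler / etcd manifests
    sudo crictl ps -a --name=kube-apiserver    # should appear once the kubelet picks the manifest up
    sudo journalctl -u kubelet -n 200          # kubelet errors if the pods never come up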
	I0910 19:04:08.667046   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:11.164622   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:15.461731   71529 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.433698154s)
	I0910 19:04:15.461801   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:15.483515   71529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:04:15.497133   71529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:04:15.513903   71529 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:04:15.513924   71529 kubeadm.go:157] found existing configuration files:
	
	I0910 19:04:15.513972   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:04:15.524468   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:04:15.524529   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:04:15.534726   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:04:15.544892   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:04:15.544944   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:04:15.554663   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:04:15.564884   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:04:15.564978   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:04:15.574280   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:04:15.583882   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:04:15.583932   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:04:15.593971   71529 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:04:15.639220   71529 kubeadm.go:310] W0910 19:04:15.612221    3037 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:04:15.641412   71529 kubeadm.go:310] W0910 19:04:15.614470    3037 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:04:15.749471   71529 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
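The two preceding warnings are kubeadm noting that the generated config still uses the deprecated kubeadm.k8s.io/v1beta3 API. They are harmless for this run, but the config can be brought forward with the command kubeadm itself suggests (old.yaml standing for the existing v1beta3 config, e.g. /var/tmp/minikube/kubeadm.yaml):

    kubeadm config migrate --old-config old.yaml --new-config new.yaml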
	I0910 19:04:13.164865   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:15.165232   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:17.664384   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:19.664943   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:22.166309   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:24.300945   71529 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 19:04:24.301016   71529 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:04:24.301143   71529 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:04:24.301274   71529 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:04:24.301408   71529 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 19:04:24.301517   71529 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:04:24.302988   71529 out.go:235]   - Generating certificates and keys ...
	I0910 19:04:24.303079   71529 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:04:24.303132   71529 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:04:24.303197   71529 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:04:24.303252   71529 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:04:24.303315   71529 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:04:24.303367   71529 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:04:24.303443   71529 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:04:24.303517   71529 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:04:24.303631   71529 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:04:24.303737   71529 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:04:24.303792   71529 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:04:24.303873   71529 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:04:24.303954   71529 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:04:24.304037   71529 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 19:04:24.304120   71529 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:04:24.304217   71529 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:04:24.304299   71529 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:04:24.304423   71529 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:04:24.304523   71529 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:04:24.305839   71529 out.go:235]   - Booting up control plane ...
	I0910 19:04:24.305946   71529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:04:24.306046   71529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:04:24.306123   71529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:04:24.306254   71529 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:04:24.306338   71529 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:04:24.306387   71529 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:04:24.306507   71529 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 19:04:24.306608   71529 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 19:04:24.306679   71529 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.526264ms
	I0910 19:04:24.306748   71529 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 19:04:24.306801   71529 kubeadm.go:310] [api-check] The API server is healthy after 5.501960865s
	I0910 19:04:24.306887   71529 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 19:04:24.306997   71529 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 19:04:24.307045   71529 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 19:04:24.307202   71529 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-347802 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 19:04:24.307250   71529 kubeadm.go:310] [bootstrap-token] Using token: 3uw8fx.h3bliquui6tuj5mh
	I0910 19:04:24.308589   71529 out.go:235]   - Configuring RBAC rules ...
	I0910 19:04:24.308728   71529 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 19:04:24.308847   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 19:04:24.309021   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 19:04:24.309197   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 19:04:24.309330   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 19:04:24.309437   71529 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 19:04:24.309612   71529 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 19:04:24.309681   71529 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 19:04:24.309776   71529 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 19:04:24.309787   71529 kubeadm.go:310] 
	I0910 19:04:24.309865   71529 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 19:04:24.309874   71529 kubeadm.go:310] 
	I0910 19:04:24.309951   71529 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 19:04:24.309963   71529 kubeadm.go:310] 
	I0910 19:04:24.309984   71529 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 19:04:24.310033   71529 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 19:04:24.310085   71529 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 19:04:24.310091   71529 kubeadm.go:310] 
	I0910 19:04:24.310152   71529 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 19:04:24.310164   71529 kubeadm.go:310] 
	I0910 19:04:24.310203   71529 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 19:04:24.310214   71529 kubeadm.go:310] 
	I0910 19:04:24.310262   71529 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 19:04:24.310326   71529 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 19:04:24.310383   71529 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 19:04:24.310390   71529 kubeadm.go:310] 
	I0910 19:04:24.310457   71529 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 19:04:24.310525   71529 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 19:04:24.310531   71529 kubeadm.go:310] 
	I0910 19:04:24.310598   71529 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3uw8fx.h3bliquui6tuj5mh \
	I0910 19:04:24.310705   71529 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 \
	I0910 19:04:24.310728   71529 kubeadm.go:310] 	--control-plane 
	I0910 19:04:24.310731   71529 kubeadm.go:310] 
	I0910 19:04:24.310806   71529 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 19:04:24.310814   71529 kubeadm.go:310] 
	I0910 19:04:24.310884   71529 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3uw8fx.h3bliquui6tuj5mh \
	I0910 19:04:24.310978   71529 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 
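The join commands above carry a bootstrap token plus a CA public-key hash. As a quick sanity check before handing the command to a worker, the hash can be recomputed on the control plane from the CA certificate (the standard kubeadm procedure; the certs live under /var/lib/minikube/certs per the [certs] lines above):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'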
	I0910 19:04:24.310994   71529 cni.go:84] Creating CNI manager for ""
	I0910 19:04:24.311006   71529 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:04:24.312411   71529 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 19:04:24.313516   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 19:04:24.326066   71529 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
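The 496-byte file pushed above is minikube's bridge CNI configuration. Its exact contents are not echoed in the log; purely for reference, a minimal bridge conflist of the same shape looks roughly like this (illustrative only, every field value assumed):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
         "hairpinMode": true, "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }
    EOF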
	I0910 19:04:24.346367   71529 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 19:04:24.346446   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:24.346475   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-347802 minikube.k8s.io/updated_at=2024_09_10T19_04_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=no-preload-347802 minikube.k8s.io/primary=true
	I0910 19:04:24.374396   71529 ops.go:34] apiserver oom_adj: -16
	I0910 19:04:24.561164   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:25.061938   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:25.561435   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:26.062175   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:26.561899   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:27.061256   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:24.664345   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:26.666316   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:27.561862   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:28.061889   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:28.562200   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:28.732352   71529 kubeadm.go:1113] duration metric: took 4.385961888s to wait for elevateKubeSystemPrivileges
	I0910 19:04:28.732387   71529 kubeadm.go:394] duration metric: took 5m2.035769941s to StartCluster
	I0910 19:04:28.732410   71529 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:04:28.732497   71529 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 19:04:28.735625   71529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:04:28.735909   71529 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 19:04:28.736234   71529 config.go:182] Loaded profile config "no-preload-347802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:04:28.736296   71529 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 19:04:28.736417   71529 addons.go:69] Setting storage-provisioner=true in profile "no-preload-347802"
	I0910 19:04:28.736445   71529 addons.go:234] Setting addon storage-provisioner=true in "no-preload-347802"
	W0910 19:04:28.736453   71529 addons.go:243] addon storage-provisioner should already be in state true
	I0910 19:04:28.736480   71529 host.go:66] Checking if "no-preload-347802" exists ...
	I0910 19:04:28.736667   71529 addons.go:69] Setting default-storageclass=true in profile "no-preload-347802"
	I0910 19:04:28.736674   71529 addons.go:69] Setting metrics-server=true in profile "no-preload-347802"
	I0910 19:04:28.736703   71529 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-347802"
	I0910 19:04:28.736717   71529 addons.go:234] Setting addon metrics-server=true in "no-preload-347802"
	W0910 19:04:28.736727   71529 addons.go:243] addon metrics-server should already be in state true
	I0910 19:04:28.736758   71529 host.go:66] Checking if "no-preload-347802" exists ...
	I0910 19:04:28.737346   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.737360   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.737401   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.737709   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.737809   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.737832   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.737891   71529 out.go:177] * Verifying Kubernetes components...
	I0910 19:04:28.739122   71529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:04:28.755720   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41059
	I0910 19:04:28.755754   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0910 19:04:28.756110   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46397
	I0910 19:04:28.756297   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.756298   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.756688   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.756870   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.756891   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.757053   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.757092   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.757402   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.757426   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.757451   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.757637   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.757759   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.758328   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.758368   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.760809   71529 addons.go:234] Setting addon default-storageclass=true in "no-preload-347802"
	W0910 19:04:28.760825   71529 addons.go:243] addon default-storageclass should already be in state true
	I0910 19:04:28.760848   71529 host.go:66] Checking if "no-preload-347802" exists ...
	I0910 19:04:28.761254   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.761285   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.761486   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.761994   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.762024   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.775766   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41103
	I0910 19:04:28.776199   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.776801   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.776824   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.777167   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.777359   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.777651   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0910 19:04:28.778091   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.778678   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.778696   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.779019   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.779215   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.779616   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 19:04:28.780231   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0910 19:04:28.780605   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.780675   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 19:04:28.781156   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.781183   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.781330   71529 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:04:28.781416   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.781810   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.781841   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.782326   71529 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 19:04:28.782391   71529 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:04:28.782408   71529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 19:04:28.782425   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 19:04:28.783393   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 19:04:28.783413   71529 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 19:04:28.783433   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 19:04:28.785287   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.785763   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 19:04:28.785792   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.785948   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 19:04:28.786114   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 19:04:28.786250   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 19:04:28.786397   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 19:04:28.786768   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.787101   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 19:04:28.787124   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.787330   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 19:04:28.787492   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 19:04:28.787637   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 19:04:28.787747   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 19:04:28.802599   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0910 19:04:28.802947   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.803402   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.803415   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.803711   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.803882   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.805296   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 19:04:28.805498   71529 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 19:04:28.805510   71529 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 19:04:28.805523   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 19:04:28.808615   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.809041   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 19:04:28.809056   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.809333   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 19:04:28.809518   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 19:04:28.809687   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 19:04:28.809792   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 19:04:28.974399   71529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:04:29.068531   71529 node_ready.go:35] waiting up to 6m0s for node "no-preload-347802" to be "Ready" ...
	I0910 19:04:29.084281   71529 node_ready.go:49] node "no-preload-347802" has status "Ready":"True"
	I0910 19:04:29.084306   71529 node_ready.go:38] duration metric: took 15.737646ms for node "no-preload-347802" to be "Ready" ...
	I0910 19:04:29.084317   71529 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:04:29.098794   71529 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:29.122272   71529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:04:29.132813   71529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 19:04:29.191758   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 19:04:29.191777   71529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 19:04:29.224998   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 19:04:29.225019   71529 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 19:04:29.264455   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:04:29.264489   71529 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 19:04:29.369504   71529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:04:30.199702   71529 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.066859027s)
	I0910 19:04:30.199757   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.199769   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.199850   71529 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.077541595s)
	I0910 19:04:30.199895   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.199909   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.200096   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.200135   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200147   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.200155   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.200154   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.200174   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.200187   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200201   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.200209   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.200220   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.200387   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200402   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.200617   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.200655   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200680   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.219416   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.219437   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.219697   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.219705   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.219741   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.366927   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.366957   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.367264   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.367279   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.367288   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.367302   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.367506   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.367520   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.367533   71529 addons.go:475] Verifying addon metrics-server=true in "no-preload-347802"
	I0910 19:04:30.369968   71529 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0910 19:04:30.371186   71529 addons.go:510] duration metric: took 1.634894777s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0910 19:04:31.104506   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:29.164993   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:31.668683   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:33.105761   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:35.606200   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:34.164783   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:36.663840   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:38.106188   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:39.106175   71529 pod_ready.go:93] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.106199   71529 pod_ready.go:82] duration metric: took 10.007378894s for pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.106210   71529 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hlbrz" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.111333   71529 pod_ready.go:93] pod "coredns-6f6b679f8f-hlbrz" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.111352   71529 pod_ready.go:82] duration metric: took 5.13344ms for pod "coredns-6f6b679f8f-hlbrz" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.111362   71529 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.116673   71529 pod_ready.go:93] pod "etcd-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.116689   71529 pod_ready.go:82] duration metric: took 5.319986ms for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.116697   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.125400   71529 pod_ready.go:93] pod "kube-apiserver-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.125422   71529 pod_ready.go:82] duration metric: took 8.717835ms for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.125433   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.133790   71529 pod_ready.go:93] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.133807   71529 pod_ready.go:82] duration metric: took 8.36626ms for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.133818   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gwzhs" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.504642   71529 pod_ready.go:93] pod "kube-proxy-gwzhs" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.504665   71529 pod_ready.go:82] duration metric: took 370.840119ms for pod "kube-proxy-gwzhs" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.504675   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.903625   71529 pod_ready.go:93] pod "kube-scheduler-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.903646   71529 pod_ready.go:82] duration metric: took 398.964651ms for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.903653   71529 pod_ready.go:39] duration metric: took 10.819325885s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:04:39.903666   71529 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:04:39.903710   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:39.918479   71529 api_server.go:72] duration metric: took 11.182520681s to wait for apiserver process to appear ...
	I0910 19:04:39.918501   71529 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:04:39.918521   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 19:04:39.922745   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0910 19:04:39.923681   71529 api_server.go:141] control plane version: v1.31.0
	I0910 19:04:39.923701   71529 api_server.go:131] duration metric: took 5.193102ms to wait for apiserver health ...
	I0910 19:04:39.923710   71529 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:04:40.106587   71529 system_pods.go:59] 9 kube-system pods found
	I0910 19:04:40.106614   71529 system_pods.go:61] "coredns-6f6b679f8f-bsp9f" [53cd67b5-b542-4b40-adf9-3aba78407735] Running
	I0910 19:04:40.106619   71529 system_pods.go:61] "coredns-6f6b679f8f-hlbrz" [1e66ea46-d3ad-44e4-b9fc-c7ea5c44ac15] Running
	I0910 19:04:40.106623   71529 system_pods.go:61] "etcd-no-preload-347802" [8fcf8fce-881a-44d8-8763-2a02c848f39e] Running
	I0910 19:04:40.106626   71529 system_pods.go:61] "kube-apiserver-no-preload-347802" [372d6d5b-bf44-4edf-bd25-7b75f5966d9e] Running
	I0910 19:04:40.106630   71529 system_pods.go:61] "kube-controller-manager-no-preload-347802" [6ab95db4-18a0-48b6-ae4c-b08ccdeaec01] Running
	I0910 19:04:40.106633   71529 system_pods.go:61] "kube-proxy-gwzhs" [f03fe8e3-bee7-4805-a1e9-83494f33105c] Running
	I0910 19:04:40.106637   71529 system_pods.go:61] "kube-scheduler-no-preload-347802" [5a130f4f-7577-4e25-ac66-b9181733e667] Running
	I0910 19:04:40.106642   71529 system_pods.go:61] "metrics-server-6867b74b74-cz4tz" [22d16ca9-922b-40d8-97d1-47a44ba70aa3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:40.106646   71529 system_pods.go:61] "storage-provisioner" [cd77229d-0209-459f-ac5e-96317c425f60] Running
	I0910 19:04:40.106652   71529 system_pods.go:74] duration metric: took 182.93737ms to wait for pod list to return data ...
	I0910 19:04:40.106662   71529 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:04:40.303294   71529 default_sa.go:45] found service account: "default"
	I0910 19:04:40.303316   71529 default_sa.go:55] duration metric: took 196.649242ms for default service account to be created ...
	I0910 19:04:40.303324   71529 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:04:40.506862   71529 system_pods.go:86] 9 kube-system pods found
	I0910 19:04:40.506894   71529 system_pods.go:89] "coredns-6f6b679f8f-bsp9f" [53cd67b5-b542-4b40-adf9-3aba78407735] Running
	I0910 19:04:40.506902   71529 system_pods.go:89] "coredns-6f6b679f8f-hlbrz" [1e66ea46-d3ad-44e4-b9fc-c7ea5c44ac15] Running
	I0910 19:04:40.506908   71529 system_pods.go:89] "etcd-no-preload-347802" [8fcf8fce-881a-44d8-8763-2a02c848f39e] Running
	I0910 19:04:40.506913   71529 system_pods.go:89] "kube-apiserver-no-preload-347802" [372d6d5b-bf44-4edf-bd25-7b75f5966d9e] Running
	I0910 19:04:40.506919   71529 system_pods.go:89] "kube-controller-manager-no-preload-347802" [6ab95db4-18a0-48b6-ae4c-b08ccdeaec01] Running
	I0910 19:04:40.506925   71529 system_pods.go:89] "kube-proxy-gwzhs" [f03fe8e3-bee7-4805-a1e9-83494f33105c] Running
	I0910 19:04:40.506931   71529 system_pods.go:89] "kube-scheduler-no-preload-347802" [5a130f4f-7577-4e25-ac66-b9181733e667] Running
	I0910 19:04:40.506940   71529 system_pods.go:89] "metrics-server-6867b74b74-cz4tz" [22d16ca9-922b-40d8-97d1-47a44ba70aa3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:40.506949   71529 system_pods.go:89] "storage-provisioner" [cd77229d-0209-459f-ac5e-96317c425f60] Running
	I0910 19:04:40.506963   71529 system_pods.go:126] duration metric: took 203.633111ms to wait for k8s-apps to be running ...
	I0910 19:04:40.506974   71529 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:04:40.507032   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:40.522711   71529 system_svc.go:56] duration metric: took 15.728044ms WaitForService to wait for kubelet
	I0910 19:04:40.522739   71529 kubeadm.go:582] duration metric: took 11.786784927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:04:40.522761   71529 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:04:40.702993   71529 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:04:40.703011   71529 node_conditions.go:123] node cpu capacity is 2
	I0910 19:04:40.703020   71529 node_conditions.go:105] duration metric: took 180.253729ms to run NodePressure ...
	I0910 19:04:40.703031   71529 start.go:241] waiting for startup goroutines ...
	I0910 19:04:40.703037   71529 start.go:246] waiting for cluster config update ...
	I0910 19:04:40.703046   71529 start.go:255] writing updated cluster config ...
	I0910 19:04:40.703329   71529 ssh_runner.go:195] Run: rm -f paused
	I0910 19:04:40.750434   71529 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:04:40.752453   71529 out.go:177] * Done! kubectl is now configured to use "no-preload-347802" cluster and "default" namespace by default
	I0910 19:04:37.670616   71183 pod_ready.go:82] duration metric: took 4m0.012645309s for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	E0910 19:04:37.670637   71183 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0910 19:04:37.670644   71183 pod_ready.go:39] duration metric: took 4m3.614436373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:04:37.670658   71183 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:04:37.670693   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:37.670746   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:37.721269   71183 cri.go:89] found id: "b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:37.721295   71183 cri.go:89] found id: ""
	I0910 19:04:37.721303   71183 logs.go:276] 1 containers: [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293]
	I0910 19:04:37.721361   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.725648   71183 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:37.725711   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:37.760937   71183 cri.go:89] found id: "4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:37.760967   71183 cri.go:89] found id: ""
	I0910 19:04:37.760978   71183 logs.go:276] 1 containers: [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34]
	I0910 19:04:37.761034   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.765181   71183 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:37.765243   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:37.800419   71183 cri.go:89] found id: "6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:37.800447   71183 cri.go:89] found id: ""
	I0910 19:04:37.800457   71183 logs.go:276] 1 containers: [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313]
	I0910 19:04:37.800509   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.805255   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:37.805330   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:37.849032   71183 cri.go:89] found id: "6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:37.849055   71183 cri.go:89] found id: ""
	I0910 19:04:37.849064   71183 logs.go:276] 1 containers: [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc]
	I0910 19:04:37.849136   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.853148   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:37.853224   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:37.888327   71183 cri.go:89] found id: "f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:37.888352   71183 cri.go:89] found id: ""
	I0910 19:04:37.888361   71183 logs.go:276] 1 containers: [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e]
	I0910 19:04:37.888417   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.892721   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:37.892782   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:37.928648   71183 cri.go:89] found id: "2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:37.928671   71183 cri.go:89] found id: ""
	I0910 19:04:37.928679   71183 logs.go:276] 1 containers: [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3]
	I0910 19:04:37.928731   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.932746   71183 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:37.932804   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:37.967343   71183 cri.go:89] found id: ""
	I0910 19:04:37.967372   71183 logs.go:276] 0 containers: []
	W0910 19:04:37.967382   71183 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:37.967387   71183 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:37.967435   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:38.004150   71183 cri.go:89] found id: "11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:38.004173   71183 cri.go:89] found id: "2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:38.004176   71183 cri.go:89] found id: ""
	I0910 19:04:38.004183   71183 logs.go:276] 2 containers: [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f]
	I0910 19:04:38.004227   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:38.008118   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:38.011779   71183 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:38.011799   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:38.026386   71183 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:38.026405   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:38.149296   71183 logs.go:123] Gathering logs for kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] ...
	I0910 19:04:38.149324   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:38.200987   71183 logs.go:123] Gathering logs for etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] ...
	I0910 19:04:38.201019   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:38.243953   71183 logs.go:123] Gathering logs for coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] ...
	I0910 19:04:38.243983   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:38.287242   71183 logs.go:123] Gathering logs for kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] ...
	I0910 19:04:38.287272   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:38.329165   71183 logs.go:123] Gathering logs for kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] ...
	I0910 19:04:38.329193   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:38.391117   71183 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:38.391144   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:38.464906   71183 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:38.464944   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:38.979681   71183 logs.go:123] Gathering logs for storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] ...
	I0910 19:04:38.979732   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:39.015604   71183 logs.go:123] Gathering logs for storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] ...
	I0910 19:04:39.015636   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:39.055715   71183 logs.go:123] Gathering logs for container status ...
	I0910 19:04:39.055748   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:39.103920   71183 logs.go:123] Gathering logs for kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] ...
	I0910 19:04:39.103952   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:41.650354   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:41.667568   71183 api_server.go:72] duration metric: took 4m15.330735169s to wait for apiserver process to appear ...
	I0910 19:04:41.667604   71183 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:04:41.667636   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:41.667682   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:41.707476   71183 cri.go:89] found id: "b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:41.707507   71183 cri.go:89] found id: ""
	I0910 19:04:41.707520   71183 logs.go:276] 1 containers: [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293]
	I0910 19:04:41.707590   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.711732   71183 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:41.711794   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:41.745943   71183 cri.go:89] found id: "4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:41.745963   71183 cri.go:89] found id: ""
	I0910 19:04:41.745972   71183 logs.go:276] 1 containers: [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34]
	I0910 19:04:41.746023   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.749930   71183 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:41.749978   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:41.790296   71183 cri.go:89] found id: "6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:41.790318   71183 cri.go:89] found id: ""
	I0910 19:04:41.790327   71183 logs.go:276] 1 containers: [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313]
	I0910 19:04:41.790388   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.794933   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:41.794988   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:41.840669   71183 cri.go:89] found id: "6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:41.840695   71183 cri.go:89] found id: ""
	I0910 19:04:41.840704   71183 logs.go:276] 1 containers: [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc]
	I0910 19:04:41.840762   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.845674   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:41.845729   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:41.891686   71183 cri.go:89] found id: "f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:41.891708   71183 cri.go:89] found id: ""
	I0910 19:04:41.891717   71183 logs.go:276] 1 containers: [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e]
	I0910 19:04:41.891774   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.896435   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:41.896486   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:41.935802   71183 cri.go:89] found id: "2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:41.935829   71183 cri.go:89] found id: ""
	I0910 19:04:41.935838   71183 logs.go:276] 1 containers: [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3]
	I0910 19:04:41.935882   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.940924   71183 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:41.940979   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:41.980326   71183 cri.go:89] found id: ""
	I0910 19:04:41.980349   71183 logs.go:276] 0 containers: []
	W0910 19:04:41.980357   71183 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:41.980362   71183 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:41.980409   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:42.021683   71183 cri.go:89] found id: "11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:42.021701   71183 cri.go:89] found id: "2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:42.021704   71183 cri.go:89] found id: ""
	I0910 19:04:42.021711   71183 logs.go:276] 2 containers: [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f]
	I0910 19:04:42.021760   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:42.025986   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:42.029896   71183 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:42.029919   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:42.101147   71183 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:42.101182   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:42.115299   71183 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:42.115324   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:42.230472   71183 logs.go:123] Gathering logs for kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] ...
	I0910 19:04:42.230503   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:42.285314   71183 logs.go:123] Gathering logs for etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] ...
	I0910 19:04:42.285341   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:42.338243   71183 logs.go:123] Gathering logs for coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] ...
	I0910 19:04:42.338283   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:42.380609   71183 logs.go:123] Gathering logs for kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] ...
	I0910 19:04:42.380636   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:42.424255   71183 logs.go:123] Gathering logs for kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] ...
	I0910 19:04:42.424290   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:42.481943   71183 logs.go:123] Gathering logs for kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] ...
	I0910 19:04:42.481972   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:42.525590   71183 logs.go:123] Gathering logs for storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] ...
	I0910 19:04:42.525613   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:42.566519   71183 logs.go:123] Gathering logs for storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] ...
	I0910 19:04:42.566546   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:42.601221   71183 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:42.601256   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:43.021780   71183 logs.go:123] Gathering logs for container status ...
	I0910 19:04:43.021816   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:45.569149   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:04:45.575146   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I0910 19:04:45.576058   71183 api_server.go:141] control plane version: v1.31.0
	I0910 19:04:45.576077   71183 api_server.go:131] duration metric: took 3.908465286s to wait for apiserver health ...
	I0910 19:04:45.576088   71183 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:04:45.576112   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:45.576159   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:45.631224   71183 cri.go:89] found id: "b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:45.631246   71183 cri.go:89] found id: ""
	I0910 19:04:45.631254   71183 logs.go:276] 1 containers: [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293]
	I0910 19:04:45.631310   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.636343   71183 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:45.636408   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:45.675538   71183 cri.go:89] found id: "4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:45.675558   71183 cri.go:89] found id: ""
	I0910 19:04:45.675565   71183 logs.go:276] 1 containers: [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34]
	I0910 19:04:45.675620   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.679865   71183 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:45.679921   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:45.724808   71183 cri.go:89] found id: "6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:45.724835   71183 cri.go:89] found id: ""
	I0910 19:04:45.724844   71183 logs.go:276] 1 containers: [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313]
	I0910 19:04:45.724898   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.729083   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:45.729141   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:45.762943   71183 cri.go:89] found id: "6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:45.762965   71183 cri.go:89] found id: ""
	I0910 19:04:45.762973   71183 logs.go:276] 1 containers: [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc]
	I0910 19:04:45.763022   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.766889   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:45.766935   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:45.802849   71183 cri.go:89] found id: "f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:45.802875   71183 cri.go:89] found id: ""
	I0910 19:04:45.802883   71183 logs.go:276] 1 containers: [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e]
	I0910 19:04:45.802924   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.806796   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:45.806860   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:45.841656   71183 cri.go:89] found id: "2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:45.841675   71183 cri.go:89] found id: ""
	I0910 19:04:45.841682   71183 logs.go:276] 1 containers: [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3]
	I0910 19:04:45.841722   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.846078   71183 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:45.846145   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:45.883750   71183 cri.go:89] found id: ""
	I0910 19:04:45.883773   71183 logs.go:276] 0 containers: []
	W0910 19:04:45.883787   71183 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:45.883795   71183 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:45.883857   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:45.918786   71183 cri.go:89] found id: "11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:45.918815   71183 cri.go:89] found id: "2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:45.918822   71183 cri.go:89] found id: ""
	I0910 19:04:45.918829   71183 logs.go:276] 2 containers: [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f]
	I0910 19:04:45.918876   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.923329   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.927395   71183 logs.go:123] Gathering logs for storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] ...
	I0910 19:04:45.927417   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:45.963527   71183 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:45.963557   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:46.364843   71183 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:46.364886   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:46.379339   71183 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:46.379366   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:46.483159   71183 logs.go:123] Gathering logs for kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] ...
	I0910 19:04:46.483190   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:46.523850   71183 logs.go:123] Gathering logs for kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] ...
	I0910 19:04:46.523877   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:46.574864   71183 logs.go:123] Gathering logs for kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] ...
	I0910 19:04:46.574905   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:46.613765   71183 logs.go:123] Gathering logs for storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] ...
	I0910 19:04:46.613793   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:46.659791   71183 logs.go:123] Gathering logs for container status ...
	I0910 19:04:46.659819   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:46.722103   71183 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:46.722138   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:46.794098   71183 logs.go:123] Gathering logs for kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] ...
	I0910 19:04:46.794140   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:46.850112   71183 logs.go:123] Gathering logs for etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] ...
	I0910 19:04:46.850148   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:46.899733   71183 logs.go:123] Gathering logs for coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] ...
	I0910 19:04:46.899770   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
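(Editorial note, not captured output: each "Gathering logs for ..." step above is a crictl log query against a container ID discovered earlier in the run; a manual equivalent for any component, using the same flags the runner uses, is sketched below.)

	# Look up the container ID by name, then pull its last 400 log lines (run on the node)
	CID="$(sudo crictl ps -a --quiet --name=coredns | head -n1)"
	sudo crictl logs --tail 400 "$CID"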
	I0910 19:04:44.413134   72122 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0910 19:04:44.413215   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:44.413400   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
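(Editorial note, not captured output: the kubelet-check failures above mean the kubelet health endpoint on 127.0.0.1:10248 is refusing connections. A minimal on-node reproduction of that probe, plus the obvious follow-up, would be:)

	# Probe the same endpoint kubeadm's kubelet-check polls (run inside the node, e.g. via `minikube ssh`)
	curl -sSL http://localhost:10248/healthz; echo
	# "connection refused" usually means the kubelet never came up; inspect the unit directly
	sudo systemctl status kubelet --no-pager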
	I0910 19:04:49.448164   71183 system_pods.go:59] 8 kube-system pods found
	I0910 19:04:49.448194   71183 system_pods.go:61] "coredns-6f6b679f8f-mt78p" [b4bbfe99-3c36-4095-b7e8-ee0861f9973f] Running
	I0910 19:04:49.448201   71183 system_pods.go:61] "etcd-embed-certs-836868" [0515a8db-e1b9-41d9-a69e-e49fbd6af70f] Running
	I0910 19:04:49.448207   71183 system_pods.go:61] "kube-apiserver-embed-certs-836868" [f6771940-518b-4ae6-93a6-8ffd2e08a6ff] Running
	I0910 19:04:49.448216   71183 system_pods.go:61] "kube-controller-manager-embed-certs-836868" [07f6b11e-478b-4501-b615-8906877c7cbe] Running
	I0910 19:04:49.448220   71183 system_pods.go:61] "kube-proxy-4fddv" [13f0b1df-26eb-4a6c-957d-0b7655309cb9] Running
	I0910 19:04:49.448225   71183 system_pods.go:61] "kube-scheduler-embed-certs-836868" [a16bf2b1-3fe3-487b-92f7-f71f693b83dd] Running
	I0910 19:04:49.448232   71183 system_pods.go:61] "metrics-server-6867b74b74-26knw" [fdf89bfa-f2b6-4dc4-9279-ed75c1256494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:49.448239   71183 system_pods.go:61] "storage-provisioner" [47ed78c5-1cce-4d50-a023-5c356f331035] Running
	I0910 19:04:49.448248   71183 system_pods.go:74] duration metric: took 3.872154051s to wait for pod list to return data ...
	I0910 19:04:49.448255   71183 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:04:49.450795   71183 default_sa.go:45] found service account: "default"
	I0910 19:04:49.450816   71183 default_sa.go:55] duration metric: took 2.553358ms for default service account to be created ...
	I0910 19:04:49.450826   71183 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:04:49.454993   71183 system_pods.go:86] 8 kube-system pods found
	I0910 19:04:49.455015   71183 system_pods.go:89] "coredns-6f6b679f8f-mt78p" [b4bbfe99-3c36-4095-b7e8-ee0861f9973f] Running
	I0910 19:04:49.455020   71183 system_pods.go:89] "etcd-embed-certs-836868" [0515a8db-e1b9-41d9-a69e-e49fbd6af70f] Running
	I0910 19:04:49.455024   71183 system_pods.go:89] "kube-apiserver-embed-certs-836868" [f6771940-518b-4ae6-93a6-8ffd2e08a6ff] Running
	I0910 19:04:49.455030   71183 system_pods.go:89] "kube-controller-manager-embed-certs-836868" [07f6b11e-478b-4501-b615-8906877c7cbe] Running
	I0910 19:04:49.455033   71183 system_pods.go:89] "kube-proxy-4fddv" [13f0b1df-26eb-4a6c-957d-0b7655309cb9] Running
	I0910 19:04:49.455038   71183 system_pods.go:89] "kube-scheduler-embed-certs-836868" [a16bf2b1-3fe3-487b-92f7-f71f693b83dd] Running
	I0910 19:04:49.455047   71183 system_pods.go:89] "metrics-server-6867b74b74-26knw" [fdf89bfa-f2b6-4dc4-9279-ed75c1256494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:49.455053   71183 system_pods.go:89] "storage-provisioner" [47ed78c5-1cce-4d50-a023-5c356f331035] Running
	I0910 19:04:49.455062   71183 system_pods.go:126] duration metric: took 4.230457ms to wait for k8s-apps to be running ...
	I0910 19:04:49.455073   71183 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:04:49.455130   71183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:49.471265   71183 system_svc.go:56] duration metric: took 16.184718ms WaitForService to wait for kubelet
	I0910 19:04:49.471293   71183 kubeadm.go:582] duration metric: took 4m23.134472506s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:04:49.471320   71183 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:04:49.475529   71183 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:04:49.475548   71183 node_conditions.go:123] node cpu capacity is 2
	I0910 19:04:49.475558   71183 node_conditions.go:105] duration metric: took 4.228611ms to run NodePressure ...
	I0910 19:04:49.475567   71183 start.go:241] waiting for startup goroutines ...
	I0910 19:04:49.475577   71183 start.go:246] waiting for cluster config update ...
	I0910 19:04:49.475589   71183 start.go:255] writing updated cluster config ...
	I0910 19:04:49.475827   71183 ssh_runner.go:195] Run: rm -f paused
	I0910 19:04:49.522354   71183 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:04:49.524738   71183 out.go:177] * Done! kubectl is now configured to use "embed-certs-836868" cluster and "default" namespace by default
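(Editorial note, not captured output: once minikube reports "Done!", the profile name doubles as the kubectl context, so a quick sanity check of the embed-certs cluster would be:)

	# Verify the freshly configured context responds and the kube-system pods are Running
	kubectl --context embed-certs-836868 get nodes -o wide
	kubectl --context embed-certs-836868 get pods -n kube-system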
	I0910 19:04:49.413796   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:49.413967   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:04:59.414341   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:59.414514   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:19.415680   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:05:19.415950   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:59.417770   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:05:59.418015   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:59.418035   72122 kubeadm.go:310] 
	I0910 19:05:59.418101   72122 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0910 19:05:59.418137   72122 kubeadm.go:310] 		timed out waiting for the condition
	I0910 19:05:59.418143   72122 kubeadm.go:310] 
	I0910 19:05:59.418178   72122 kubeadm.go:310] 	This error is likely caused by:
	I0910 19:05:59.418207   72122 kubeadm.go:310] 		- The kubelet is not running
	I0910 19:05:59.418313   72122 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0910 19:05:59.418321   72122 kubeadm.go:310] 
	I0910 19:05:59.418443   72122 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0910 19:05:59.418477   72122 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0910 19:05:59.418519   72122 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0910 19:05:59.418527   72122 kubeadm.go:310] 
	I0910 19:05:59.418625   72122 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0910 19:05:59.418723   72122 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0910 19:05:59.418731   72122 kubeadm.go:310] 
	I0910 19:05:59.418869   72122 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0910 19:05:59.418976   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0910 19:05:59.419045   72122 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0910 19:05:59.419141   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0910 19:05:59.419152   72122 kubeadm.go:310] 
	I0910 19:05:59.420015   72122 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:05:59.420093   72122 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0910 19:05:59.420165   72122 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0910 19:05:59.420289   72122 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
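(Editorial note, not captured output: the troubleshooting steps kubeadm prints above can be chained directly on the node; the sketch below uses the CRI-O socket path named in the message.)

	# List whatever control-plane containers CRI-O actually started, per kubeadm's suggestion
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Then check why the kubelet itself is down
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100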
	
	I0910 19:05:59.420339   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:06:04.848652   72122 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.428289133s)
	I0910 19:06:04.848719   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:06:04.862914   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:06:04.872903   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:06:04.872920   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 19:06:04.872960   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:06:04.882109   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:06:04.882168   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:06:04.890962   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:06:04.899925   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:06:04.899985   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:06:04.908796   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:06:04.917123   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:06:04.917173   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:06:04.925821   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:06:04.937885   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:06:04.937963   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
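(Editorial note, not captured output: the grep/rm sequence above amounts to the following cleanup loop, which removes any stale kubeconfig that no longer points at the expected control-plane endpoint; the paths are the same four files shown in the log.)

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" 2>/dev/null; then
	    sudo rm -f "/etc/kubernetes/$f"   # missing or stale: safe to delete before re-running kubeadm init
	  fi
	done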
	I0910 19:06:04.948108   72122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:06:05.019246   72122 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0910 19:06:05.019321   72122 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:06:05.162639   72122 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:06:05.162770   72122 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:06:05.162918   72122 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 19:06:05.343270   72122 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:06:05.345092   72122 out.go:235]   - Generating certificates and keys ...
	I0910 19:06:05.345189   72122 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:06:05.345299   72122 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:06:05.345417   72122 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:06:05.345497   72122 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:06:05.345606   72122 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:06:05.345718   72122 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:06:05.345981   72122 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:06:05.346367   72122 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:06:05.346822   72122 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:06:05.347133   72122 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:06:05.347246   72122 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:06:05.347346   72122 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:06:05.536681   72122 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:06:05.773929   72122 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:06:05.994857   72122 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:06:06.139145   72122 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:06:06.154510   72122 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:06:06.155479   72122 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:06:06.155548   72122 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:06:06.311520   72122 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:06:06.314167   72122 out.go:235]   - Booting up control plane ...
	I0910 19:06:06.314311   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:06:06.320856   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:06:06.321801   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:06:06.322508   72122 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:06:06.324744   72122 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 19:06:46.327168   72122 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0910 19:06:46.327286   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:06:46.327534   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:06:51.328423   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:06:51.328643   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:07:01.329028   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:07:01.329315   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:07:21.329371   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:07:21.329627   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:08:01.328238   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:08:01.328535   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:08:01.328566   72122 kubeadm.go:310] 
	I0910 19:08:01.328625   72122 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0910 19:08:01.328688   72122 kubeadm.go:310] 		timed out waiting for the condition
	I0910 19:08:01.328701   72122 kubeadm.go:310] 
	I0910 19:08:01.328749   72122 kubeadm.go:310] 	This error is likely caused by:
	I0910 19:08:01.328797   72122 kubeadm.go:310] 		- The kubelet is not running
	I0910 19:08:01.328941   72122 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0910 19:08:01.328953   72122 kubeadm.go:310] 
	I0910 19:08:01.329068   72122 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0910 19:08:01.329136   72122 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0910 19:08:01.329177   72122 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0910 19:08:01.329191   72122 kubeadm.go:310] 
	I0910 19:08:01.329310   72122 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0910 19:08:01.329377   72122 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0910 19:08:01.329383   72122 kubeadm.go:310] 
	I0910 19:08:01.329468   72122 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0910 19:08:01.329539   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0910 19:08:01.329607   72122 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0910 19:08:01.329667   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0910 19:08:01.329674   72122 kubeadm.go:310] 
	I0910 19:08:01.330783   72122 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:08:01.330892   72122 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0910 19:08:01.330963   72122 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0910 19:08:01.331020   72122 kubeadm.go:394] duration metric: took 8m1.874926868s to StartCluster
	I0910 19:08:01.331061   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:08:01.331117   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:08:01.385468   72122 cri.go:89] found id: ""
	I0910 19:08:01.385492   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.385499   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:08:01.385505   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:08:01.385571   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:08:01.424028   72122 cri.go:89] found id: ""
	I0910 19:08:01.424051   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.424060   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:08:01.424064   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:08:01.424121   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:08:01.462946   72122 cri.go:89] found id: ""
	I0910 19:08:01.462973   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.462983   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:08:01.462991   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:08:01.463045   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:08:01.498242   72122 cri.go:89] found id: ""
	I0910 19:08:01.498269   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.498278   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:08:01.498283   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:08:01.498329   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:08:01.532917   72122 cri.go:89] found id: ""
	I0910 19:08:01.532946   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.532953   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:08:01.532959   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:08:01.533011   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:08:01.567935   72122 cri.go:89] found id: ""
	I0910 19:08:01.567959   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.567967   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:08:01.567973   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:08:01.568027   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:08:01.601393   72122 cri.go:89] found id: ""
	I0910 19:08:01.601418   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.601426   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:08:01.601432   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:08:01.601489   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:08:01.639307   72122 cri.go:89] found id: ""
	I0910 19:08:01.639335   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.639345   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:08:01.639358   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:08:01.639373   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:08:01.726566   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:08:01.726591   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:08:01.726614   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:08:01.839965   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:08:01.840004   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:08:01.879658   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:08:01.879687   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:08:01.939066   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:08:01.939102   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0910 19:08:01.955390   72122 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0910 19:08:01.955436   72122 out.go:270] * 
	W0910 19:08:01.955500   72122 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0910 19:08:01.955524   72122 out.go:270] * 
	W0910 19:08:01.956343   72122 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 19:08:01.959608   72122 out.go:201] 
	W0910 19:08:01.960877   72122 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0910 19:08:01.960929   72122 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0910 19:08:01.960957   72122 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
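(Editorial note, not captured output: the suggestion above can be tried verbatim against this profile; the flags are exactly the ones named in the log, but the retry itself was not part of this run.)

	# Retry the failing profile with the cgroup-driver override minikube suggests
	minikube start -p old-k8s-version-432422 --extra-config=kubelet.cgroup-driver=systemd
	# Afterwards, confirm the kubelet actually stays up on the node
	minikube ssh -p old-k8s-version-432422 -- sudo systemctl is-active kubelet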
	I0910 19:08:01.962345   72122 out.go:201] 
	
	
	==> CRI-O <==
	Sep 10 19:08:03 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:03.897978783Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995283897956338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1fa0cec1-2969-4b8d-9abb-ac1d144b5a51 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:08:03 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:03.898653409Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2cfff077-437d-4a5b-9e30-0c6c78e21b81 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:08:03 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:03.898709307Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2cfff077-437d-4a5b-9e30-0c6c78e21b81 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:08:03 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:03.898740164Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2cfff077-437d-4a5b-9e30-0c6c78e21b81 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:08:03 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:03.934116185Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ea2af6e-1284-4407-b500-ff488c249ffe name=/runtime.v1.RuntimeService/Version
	Sep 10 19:08:03 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:03.934232759Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ea2af6e-1284-4407-b500-ff488c249ffe name=/runtime.v1.RuntimeService/Version
	Sep 10 19:08:03 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:03.935436218Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3fcc5af-5b69-474a-bd17-403180a5eacd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:08:03 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:03.935793840Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995283935770537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3fcc5af-5b69-474a-bd17-403180a5eacd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:08:03 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:03.936421662Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30efa89a-0a4a-4a28-8766-619aa511fd27 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:08:03 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:03.936477884Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30efa89a-0a4a-4a28-8766-619aa511fd27 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:08:03 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:03.936511360Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=30efa89a-0a4a-4a28-8766-619aa511fd27 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:08:03 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:03.971025876Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b224354-5e1f-49b2-9587-9c8c2800f46c name=/runtime.v1.RuntimeService/Version
	Sep 10 19:08:03 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:03.971123718Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b224354-5e1f-49b2-9587-9c8c2800f46c name=/runtime.v1.RuntimeService/Version
	Sep 10 19:08:03 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:03.972392027Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=800434be-8e80-45f5-960e-ddae7f9fecc6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:08:03 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:03.972909810Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995283972869187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=800434be-8e80-45f5-960e-ddae7f9fecc6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:08:03 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:03.973463316Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=918f837b-cb59-4b8f-804a-6d27c241a1f8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:08:03 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:03.973529980Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=918f837b-cb59-4b8f-804a-6d27c241a1f8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:08:03 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:03.973579047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=918f837b-cb59-4b8f-804a-6d27c241a1f8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:08:04 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:04.010889439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=03ad994f-7f24-4384-a991-944e289d3afe name=/runtime.v1.RuntimeService/Version
	Sep 10 19:08:04 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:04.010993338Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=03ad994f-7f24-4384-a991-944e289d3afe name=/runtime.v1.RuntimeService/Version
	Sep 10 19:08:04 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:04.012504248Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1dc5de9b-cf60-4dc4-9e59-0f4f9c17c437 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:08:04 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:04.012997834Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995284012972950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1dc5de9b-cf60-4dc4-9e59-0f4f9c17c437 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:08:04 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:04.013815346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41b427d7-f027-4fe3-b08a-27b2f936ea98 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:08:04 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:04.013878711Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41b427d7-f027-4fe3-b08a-27b2f936ea98 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:08:04 old-k8s-version-432422 crio[642]: time="2024-09-10 19:08:04.013929843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=41b427d7-f027-4fe3-b08a-27b2f936ea98 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep10 18:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.058119] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044186] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.255058] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.413650] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.618121] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.079518] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.057884] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065532] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.191553] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.154429] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.265022] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +6.430445] systemd-fstab-generator[893]: Ignoring "noauto" option for root device
	[  +0.070012] kauditd_printk_skb: 130 callbacks suppressed
	[Sep10 19:00] systemd-fstab-generator[1016]: Ignoring "noauto" option for root device
	[ +11.862536] kauditd_printk_skb: 46 callbacks suppressed
	[Sep10 19:04] systemd-fstab-generator[5076]: Ignoring "noauto" option for root device
	[Sep10 19:06] systemd-fstab-generator[5359]: Ignoring "noauto" option for root device
	[  +0.066075] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:08:04 up 8 min,  0 users,  load average: 0.04, 0.13, 0.08
	Linux old-k8s-version-432422 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5532]:         /usr/local/go/src/net/net.go:182 +0x8e
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5532]: bufio.(*Reader).Read(0xc0001c51a0, 0xc000254ff8, 0x9, 0x9, 0x4f16701, 0x1000500c8, 0x2f)
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5532]:         /usr/local/go/src/bufio/bufio.go:227 +0x222
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5532]: io.ReadAtLeast(0x4f04880, 0xc0001c51a0, 0xc000254ff8, 0x9, 0x9, 0x9, 0xc0008fdb28, 0xc00092a420, 0x2f)
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5532]:         /usr/local/go/src/io/io.go:314 +0x87
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5532]: io.ReadFull(...)
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5532]:         /usr/local/go/src/io/io.go:333
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5532]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc000254ff8, 0x9, 0x9, 0x4f04880, 0xc0001c51a0, 0x0, 0x0, 0x0, 0x0)
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5532]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000254fc0, 0xc00091dc50, 0xc00091dc50, 0x0, 0x0)
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5532]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0007ed6c0)
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5532]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Sep 10 19:08:01 old-k8s-version-432422 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 10 19:08:01 old-k8s-version-432422 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 10 19:08:01 old-k8s-version-432422 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Sep 10 19:08:01 old-k8s-version-432422 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 10 19:08:01 old-k8s-version-432422 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5586]: I0910 19:08:01.804866    5586 server.go:416] Version: v1.20.0
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5586]: I0910 19:08:01.805236    5586 server.go:837] Client rotation is on, will bootstrap in background
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5586]: I0910 19:08:01.807117    5586 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5586]: I0910 19:08:01.808301    5586 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Sep 10 19:08:01 old-k8s-version-432422 kubelet[5586]: W0910 19:08:01.808309    5586 manager.go:159] Cannot detect current cgroup on cgroup v2
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-432422 -n old-k8s-version-432422
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-432422 -n old-k8s-version-432422: exit status 2 (234.587749ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-432422" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (723.15s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0910 19:04:23.979355   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:04:38.788532   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-557504 -n default-k8s-diff-port-557504
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-10 19:13:07.641944091 +0000 UTC m=+6247.465718855
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-557504 -n default-k8s-diff-port-557504
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-557504 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-557504 logs -n 25: (2.040427937s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-642043 sudo cat                              | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo find                             | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo crio                             | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-642043                                       | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-186737 | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | disable-driver-mounts-186737                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:52 UTC |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-836868            | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-836868                                  | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-347802             | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-557504  | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC | 10 Sep 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC |                     |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-836868                 | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-836868                                  | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-432422        | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-347802                  | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-557504       | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-432422             | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:56:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:56:02.487676   72122 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:56:02.487789   72122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:56:02.487799   72122 out.go:358] Setting ErrFile to fd 2...
	I0910 18:56:02.487804   72122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:56:02.487953   72122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:56:02.488491   72122 out.go:352] Setting JSON to false
	I0910 18:56:02.489572   72122 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5914,"bootTime":1725988648,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 18:56:02.489637   72122 start.go:139] virtualization: kvm guest
	I0910 18:56:02.491991   72122 out.go:177] * [old-k8s-version-432422] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 18:56:02.493117   72122 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:56:02.493113   72122 notify.go:220] Checking for updates...
	I0910 18:56:02.494213   72122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:56:02.495356   72122 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:56:02.496370   72122 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:56:02.497440   72122 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 18:56:02.498703   72122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:56:02.500450   72122 config.go:182] Loaded profile config "old-k8s-version-432422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0910 18:56:02.501100   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:56:02.501150   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:56:02.515836   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0910 18:56:02.516286   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:56:02.516787   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:56:02.516815   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:56:02.517116   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:56:02.517300   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:56:02.519092   72122 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0910 18:56:02.520121   72122 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:56:02.520405   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:56:02.520436   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:56:02.534860   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34601
	I0910 18:56:02.535243   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:56:02.535688   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:56:02.535711   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:56:02.536004   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:56:02.536215   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:56:02.570682   72122 out.go:177] * Using the kvm2 driver based on existing profile
	I0910 18:56:02.571710   72122 start.go:297] selected driver: kvm2
	I0910 18:56:02.571722   72122 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:56:02.571821   72122 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:56:02.572465   72122 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:56:02.572528   72122 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 18:56:02.587001   72122 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 18:56:02.587381   72122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:56:02.587417   72122 cni.go:84] Creating CNI manager for ""
	I0910 18:56:02.587427   72122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:56:02.587471   72122 start.go:340] cluster config:
	{Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:56:02.587599   72122 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:56:02.589116   72122 out.go:177] * Starting "old-k8s-version-432422" primary control-plane node in "old-k8s-version-432422" cluster
	I0910 18:56:02.590155   72122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 18:56:02.590185   72122 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0910 18:56:02.590194   72122 cache.go:56] Caching tarball of preloaded images
	I0910 18:56:02.590294   72122 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 18:56:02.590313   72122 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0910 18:56:02.590415   72122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/config.json ...
	I0910 18:56:02.590612   72122 start.go:360] acquireMachinesLock for old-k8s-version-432422: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:56:08.097313   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:11.169360   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:17.249255   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:20.321326   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:26.401359   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:29.473351   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:35.553474   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:38.625322   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:44.705324   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:47.777408   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:53.857373   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:56.929356   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:03.009354   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:06.081346   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:12.161342   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:15.233363   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:21.313385   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:24.385281   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:30.465347   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:33.537368   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:39.617395   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:42.689359   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:48.769334   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:51.841388   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:57.921359   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:00.993375   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:07.073343   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:10.145433   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:16.225336   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:19.297345   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:25.377291   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:28.449365   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:34.529306   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:37.601300   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:43.681334   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:46.753328   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:49.757234   71529 start.go:364] duration metric: took 4m17.481092907s to acquireMachinesLock for "no-preload-347802"
	I0910 18:58:49.757299   71529 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:58:49.757316   71529 fix.go:54] fixHost starting: 
	I0910 18:58:49.757667   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:58:49.757694   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:58:49.772681   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41041
	I0910 18:58:49.773067   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:58:49.773498   71529 main.go:141] libmachine: Using API Version  1
	I0910 18:58:49.773518   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:58:49.773963   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:58:49.774127   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:58:49.774279   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 18:58:49.775704   71529 fix.go:112] recreateIfNeeded on no-preload-347802: state=Stopped err=<nil>
	I0910 18:58:49.775726   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	W0910 18:58:49.775886   71529 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:58:49.777669   71529 out.go:177] * Restarting existing kvm2 VM for "no-preload-347802" ...
	I0910 18:58:49.778739   71529 main.go:141] libmachine: (no-preload-347802) Calling .Start
	I0910 18:58:49.778882   71529 main.go:141] libmachine: (no-preload-347802) Ensuring networks are active...
	I0910 18:58:49.779509   71529 main.go:141] libmachine: (no-preload-347802) Ensuring network default is active
	I0910 18:58:49.779824   71529 main.go:141] libmachine: (no-preload-347802) Ensuring network mk-no-preload-347802 is active
	I0910 18:58:49.780121   71529 main.go:141] libmachine: (no-preload-347802) Getting domain xml...
	I0910 18:58:49.780766   71529 main.go:141] libmachine: (no-preload-347802) Creating domain...
	I0910 18:58:50.967405   71529 main.go:141] libmachine: (no-preload-347802) Waiting to get IP...
	I0910 18:58:50.968284   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:50.968647   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:50.968726   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:50.968628   72707 retry.go:31] will retry after 197.094328ms: waiting for machine to come up
	I0910 18:58:51.167237   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:51.167630   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:51.167683   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:51.167603   72707 retry.go:31] will retry after 272.376855ms: waiting for machine to come up
	I0910 18:58:51.441212   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:51.441673   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:51.441698   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:51.441636   72707 retry.go:31] will retry after 458.172114ms: waiting for machine to come up
	I0910 18:58:51.900991   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:51.901464   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:51.901498   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:51.901428   72707 retry.go:31] will retry after 442.42629ms: waiting for machine to come up
	I0910 18:58:49.754913   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:58:49.754977   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 18:58:49.755310   71183 buildroot.go:166] provisioning hostname "embed-certs-836868"
	I0910 18:58:49.755335   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 18:58:49.755513   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 18:58:49.757052   71183 machine.go:96] duration metric: took 4m37.423474417s to provisionDockerMachine
	I0910 18:58:49.757138   71183 fix.go:56] duration metric: took 4m37.44458491s for fixHost
	I0910 18:58:49.757149   71183 start.go:83] releasing machines lock for "embed-certs-836868", held for 4m37.444613055s
	W0910 18:58:49.757173   71183 start.go:714] error starting host: provision: host is not running
	W0910 18:58:49.757263   71183 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0910 18:58:49.757273   71183 start.go:729] Will try again in 5 seconds ...
	I0910 18:58:52.345053   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:52.345519   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:52.345540   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:52.345463   72707 retry.go:31] will retry after 732.353971ms: waiting for machine to come up
	I0910 18:58:53.079229   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:53.079686   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:53.079714   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:53.079638   72707 retry.go:31] will retry after 658.057224ms: waiting for machine to come up
	I0910 18:58:53.739313   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:53.739750   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:53.739811   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:53.739732   72707 retry.go:31] will retry after 910.559952ms: waiting for machine to come up
	I0910 18:58:54.651714   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:54.652075   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:54.652099   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:54.652027   72707 retry.go:31] will retry after 1.410431493s: waiting for machine to come up
	I0910 18:58:56.063996   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:56.064396   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:56.064418   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:56.064360   72707 retry.go:31] will retry after 1.795467467s: waiting for machine to come up
	I0910 18:58:54.759533   71183 start.go:360] acquireMachinesLock for embed-certs-836868: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:58:57.862130   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:57.862484   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:57.862509   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:57.862453   72707 retry.go:31] will retry after 1.450403908s: waiting for machine to come up
	I0910 18:58:59.315197   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:59.315621   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:59.315657   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:59.315566   72707 retry.go:31] will retry after 1.81005281s: waiting for machine to come up
	I0910 18:59:01.128164   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:01.128611   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:59:01.128642   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:59:01.128563   72707 retry.go:31] will retry after 3.333505805s: waiting for machine to come up
	I0910 18:59:04.464526   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:04.465004   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:59:04.465030   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:59:04.464951   72707 retry.go:31] will retry after 3.603817331s: waiting for machine to come up
	I0910 18:59:09.257584   71627 start.go:364] duration metric: took 4m27.770499275s to acquireMachinesLock for "default-k8s-diff-port-557504"
	I0910 18:59:09.257656   71627 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:09.257673   71627 fix.go:54] fixHost starting: 
	I0910 18:59:09.258100   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:09.258144   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:09.276230   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0910 18:59:09.276622   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:09.277129   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:09.277151   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:09.277489   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:09.277663   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:09.277793   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:09.279006   71627 fix.go:112] recreateIfNeeded on default-k8s-diff-port-557504: state=Stopped err=<nil>
	I0910 18:59:09.279043   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	W0910 18:59:09.279178   71627 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:09.281106   71627 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-557504" ...
	I0910 18:59:08.073057   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.073638   71529 main.go:141] libmachine: (no-preload-347802) Found IP for machine: 192.168.50.138
	I0910 18:59:08.073660   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has current primary IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.073666   71529 main.go:141] libmachine: (no-preload-347802) Reserving static IP address...
	I0910 18:59:08.074129   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "no-preload-347802", mac: "52:54:00:5b:b1:44", ip: "192.168.50.138"} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.074153   71529 main.go:141] libmachine: (no-preload-347802) Reserved static IP address: 192.168.50.138
	I0910 18:59:08.074170   71529 main.go:141] libmachine: (no-preload-347802) DBG | skip adding static IP to network mk-no-preload-347802 - found existing host DHCP lease matching {name: "no-preload-347802", mac: "52:54:00:5b:b1:44", ip: "192.168.50.138"}
	I0910 18:59:08.074179   71529 main.go:141] libmachine: (no-preload-347802) Waiting for SSH to be available...
	I0910 18:59:08.074187   71529 main.go:141] libmachine: (no-preload-347802) DBG | Getting to WaitForSSH function...
	I0910 18:59:08.076434   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.076744   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.076767   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.076928   71529 main.go:141] libmachine: (no-preload-347802) DBG | Using SSH client type: external
	I0910 18:59:08.076950   71529 main.go:141] libmachine: (no-preload-347802) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa (-rw-------)
	I0910 18:59:08.076979   71529 main.go:141] libmachine: (no-preload-347802) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:08.076992   71529 main.go:141] libmachine: (no-preload-347802) DBG | About to run SSH command:
	I0910 18:59:08.077029   71529 main.go:141] libmachine: (no-preload-347802) DBG | exit 0
	I0910 18:59:08.201181   71529 main.go:141] libmachine: (no-preload-347802) DBG | SSH cmd err, output: <nil>: 
	I0910 18:59:08.201561   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetConfigRaw
	I0910 18:59:08.202195   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:08.204390   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.204639   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.204676   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.204932   71529 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/config.json ...
	I0910 18:59:08.205227   71529 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:08.205245   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:08.205464   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.207451   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.207833   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.207862   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.207956   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.208120   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.208266   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.208402   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.208584   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.208811   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.208826   71529 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:08.317392   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:08.317421   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetMachineName
	I0910 18:59:08.317693   71529 buildroot.go:166] provisioning hostname "no-preload-347802"
	I0910 18:59:08.317721   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetMachineName
	I0910 18:59:08.317870   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.320440   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.320749   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.320777   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.320922   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.321092   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.321295   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.321433   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.321607   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.321764   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.321778   71529 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-347802 && echo "no-preload-347802" | sudo tee /etc/hostname
	I0910 18:59:08.442907   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-347802
	
	I0910 18:59:08.442932   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.445449   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.445743   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.445769   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.445930   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.446135   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.446308   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.446461   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.446642   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.446831   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.446853   71529 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-347802' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-347802/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-347802' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:59:08.561710   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:59:08.561738   71529 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:08.561760   71529 buildroot.go:174] setting up certificates
	I0910 18:59:08.561771   71529 provision.go:84] configureAuth start
	I0910 18:59:08.561782   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetMachineName
	I0910 18:59:08.562065   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:08.564917   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.565296   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.565318   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.565468   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.567579   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.567883   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.567909   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.567998   71529 provision.go:143] copyHostCerts
	I0910 18:59:08.568062   71529 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:08.568074   71529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:08.568155   71529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:08.568259   71529 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:08.568269   71529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:08.568297   71529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:08.568362   71529 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:08.568369   71529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:08.568398   71529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:08.568457   71529 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.no-preload-347802 san=[127.0.0.1 192.168.50.138 localhost minikube no-preload-347802]
	I0910 18:59:08.635212   71529 provision.go:177] copyRemoteCerts
	I0910 18:59:08.635296   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:08.635321   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.637851   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.638202   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.638227   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.638392   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.638561   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.638727   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.638850   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:08.723477   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:08.747854   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0910 18:59:08.770184   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:08.792105   71529 provision.go:87] duration metric: took 230.324534ms to configureAuth
	I0910 18:59:08.792125   71529 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:08.792306   71529 config.go:182] Loaded profile config "no-preload-347802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:59:08.792389   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.795139   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.795414   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.795440   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.795580   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.795767   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.795931   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.796075   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.796201   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.796385   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.796404   71529 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:09.021498   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:09.021530   71529 machine.go:96] duration metric: took 816.290576ms to provisionDockerMachine
	I0910 18:59:09.021540   71529 start.go:293] postStartSetup for "no-preload-347802" (driver="kvm2")
	I0910 18:59:09.021566   71529 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:09.021587   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.021923   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:09.021951   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.024598   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.024935   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.024965   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.025210   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.025416   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.025598   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.025747   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:09.107986   71529 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:09.111947   71529 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:09.111967   71529 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:09.112028   71529 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:09.112098   71529 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:09.112184   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:09.121734   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:09.144116   71529 start.go:296] duration metric: took 122.562738ms for postStartSetup
	I0910 18:59:09.144159   71529 fix.go:56] duration metric: took 19.386851685s for fixHost
	I0910 18:59:09.144183   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.146816   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.147237   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.147278   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.147396   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.147583   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.147754   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.147886   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.148060   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:09.148274   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:09.148285   71529 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:09.257433   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994749.232014074
	
	I0910 18:59:09.257456   71529 fix.go:216] guest clock: 1725994749.232014074
	I0910 18:59:09.257463   71529 fix.go:229] Guest: 2024-09-10 18:59:09.232014074 +0000 UTC Remote: 2024-09-10 18:59:09.144164668 +0000 UTC m=+277.006797443 (delta=87.849406ms)
	I0910 18:59:09.257478   71529 fix.go:200] guest clock delta is within tolerance: 87.849406ms
	I0910 18:59:09.257491   71529 start.go:83] releasing machines lock for "no-preload-347802", held for 19.50021281s
	I0910 18:59:09.257522   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.257777   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:09.260357   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.260690   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.260715   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.260895   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.261369   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.261545   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.261631   71529 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:09.261681   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.261749   71529 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:09.261774   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.264296   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.264598   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.264630   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.264650   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.264907   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.264992   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.265020   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.265067   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.265189   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.265266   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.265342   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.265400   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:09.265470   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.265602   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:09.367236   71529 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:09.373255   71529 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:09.513271   71529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:09.519091   71529 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:09.519153   71529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:09.534617   71529 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:09.534639   71529 start.go:495] detecting cgroup driver to use...
	I0910 18:59:09.534698   71529 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:09.551186   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:09.565123   71529 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:09.565193   71529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:09.578892   71529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:09.592571   71529 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:09.700953   71529 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:09.831175   71529 docker.go:233] disabling docker service ...
	I0910 18:59:09.831245   71529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:09.845755   71529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:09.858961   71529 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:10.008707   71529 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:10.144588   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:10.158486   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:10.176399   71529 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:59:10.176456   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.186448   71529 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:10.186511   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.196600   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.206639   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.216913   71529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:10.227030   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.237962   71529 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.255181   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.265618   71529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:10.275659   71529 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:10.275713   71529 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:10.288712   71529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:10.301886   71529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:10.415847   71529 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:59:10.500738   71529 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:10.500829   71529 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:10.506564   71529 start.go:563] Will wait 60s for crictl version
	I0910 18:59:10.506620   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:10.510639   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:10.553929   71529 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:10.554034   71529 ssh_runner.go:195] Run: crio --version
	I0910 18:59:10.582508   71529 ssh_runner.go:195] Run: crio --version
	I0910 18:59:10.622516   71529 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 18:59:09.282182   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Start
	I0910 18:59:09.282345   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Ensuring networks are active...
	I0910 18:59:09.282958   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Ensuring network default is active
	I0910 18:59:09.283450   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Ensuring network mk-default-k8s-diff-port-557504 is active
	I0910 18:59:09.283810   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Getting domain xml...
	I0910 18:59:09.284454   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Creating domain...
	I0910 18:59:10.513168   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting to get IP...
	I0910 18:59:10.514173   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.514591   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.514681   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:10.514587   72843 retry.go:31] will retry after 228.672382ms: waiting for machine to come up
	I0910 18:59:10.745046   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.745450   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.745508   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:10.745440   72843 retry.go:31] will retry after 329.196616ms: waiting for machine to come up
	I0910 18:59:11.075777   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.076237   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.076269   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:11.076188   72843 retry.go:31] will retry after 317.98463ms: waiting for machine to come up
	I0910 18:59:10.623864   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:10.626709   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:10.627042   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:10.627084   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:10.627336   71529 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:10.631579   71529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:10.644077   71529 kubeadm.go:883] updating cluster {Name:no-preload-347802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-347802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:10.644183   71529 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:59:10.644215   71529 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:10.679225   71529 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 18:59:10.679247   71529 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 18:59:10.679332   71529 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:10.679346   71529 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:10.679371   71529 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:10.679384   71529 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0910 18:59:10.679395   71529 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.679472   71529 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.679371   71529 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:10.679336   71529 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:10.681147   71529 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.681163   71529 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:10.681183   71529 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.681196   71529 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0910 18:59:10.681163   71529 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:10.681189   71529 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:10.681232   71529 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:10.681304   71529 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:10.841312   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.848638   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.872351   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:10.875581   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:10.882457   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:10.894360   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0910 18:59:10.895305   71529 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0910 18:59:10.895341   71529 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.895379   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:10.898460   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:10.953614   71529 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0910 18:59:10.953659   71529 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.953706   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.042770   71529 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0910 18:59:11.042837   71529 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0910 18:59:11.042862   71529 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.042873   71529 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.042914   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.042914   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.042820   71529 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0910 18:59:11.043065   71529 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.043097   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.129993   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:11.130090   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:11.130018   71529 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0910 18:59:11.130143   71529 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.130187   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.130189   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.130206   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.130271   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.239573   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:11.239626   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:11.241780   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.241795   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.241853   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.241883   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.360008   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:11.360027   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:11.360067   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.371623   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.371632   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.371632   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.480504   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0910 18:59:11.480591   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.480615   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0910 18:59:11.480635   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0910 18:59:11.480725   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0910 18:59:11.488248   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:11.510860   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0910 18:59:11.510950   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0910 18:59:11.510959   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0910 18:59:11.511032   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0910 18:59:11.514065   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0910 18:59:11.514136   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0910 18:59:11.555358   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0910 18:59:11.555425   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0910 18:59:11.555445   71529 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0910 18:59:11.555465   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0910 18:59:11.555491   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0910 18:59:11.555497   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0910 18:59:11.578210   71529 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0910 18:59:11.578227   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0910 18:59:11.578258   71529 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:11.578273   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0910 18:59:11.578306   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.578345   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0910 18:59:11.578310   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0910 18:59:11.395907   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.396361   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.396389   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:11.396320   72843 retry.go:31] will retry after 511.273215ms: waiting for machine to come up
	I0910 18:59:11.909582   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.910012   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.910041   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:11.909957   72843 retry.go:31] will retry after 712.801984ms: waiting for machine to come up
	I0910 18:59:12.624608   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:12.625042   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:12.625083   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:12.625014   72843 retry.go:31] will retry after 873.57855ms: waiting for machine to come up
	I0910 18:59:13.499767   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:13.500117   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:13.500144   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:13.500071   72843 retry.go:31] will retry after 1.180667971s: waiting for machine to come up
	I0910 18:59:14.682848   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:14.683351   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:14.683381   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:14.683297   72843 retry.go:31] will retry after 1.211684184s: waiting for machine to come up
	I0910 18:59:15.896172   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:15.896651   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:15.896679   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:15.896597   72843 retry.go:31] will retry after 1.541313035s: waiting for machine to come up
	I0910 18:59:13.534642   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.978971061s)
	I0910 18:59:13.534680   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0910 18:59:13.534686   71529 ssh_runner.go:235] Completed: which crictl: (1.956359959s)
	I0910 18:59:13.534704   71529 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0910 18:59:13.534753   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:13.534754   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0910 18:59:13.580670   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:17.439293   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:17.439652   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:17.439679   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:17.439607   72843 retry.go:31] will retry after 2.232253017s: waiting for machine to come up
	I0910 18:59:19.673727   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:19.674114   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:19.674141   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:19.674070   72843 retry.go:31] will retry after 2.324233118s: waiting for machine to come up
	I0910 18:59:17.225574   71529 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.644871938s)
	I0910 18:59:17.225574   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.690724664s)
	I0910 18:59:17.225647   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0910 18:59:17.225671   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:17.225676   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0910 18:59:17.225702   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0910 18:59:19.705947   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.48021773s)
	I0910 18:59:19.705982   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0910 18:59:19.706006   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0910 18:59:19.706045   71529 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.480359026s)
	I0910 18:59:19.706069   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0910 18:59:19.706098   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0910 18:59:19.706176   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0910 18:59:21.666588   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.960494926s)
	I0910 18:59:21.666623   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0910 18:59:21.666640   71529 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.960446302s)
	I0910 18:59:21.666648   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0910 18:59:21.666666   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0910 18:59:21.666699   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0910 18:59:22.000591   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:22.001014   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:22.001047   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:22.000951   72843 retry.go:31] will retry after 3.327224401s: waiting for machine to come up
	I0910 18:59:25.329967   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:25.330414   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:25.330445   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:25.330367   72843 retry.go:31] will retry after 3.45596573s: waiting for machine to come up
	I0910 18:59:23.216195   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.549470753s)
	I0910 18:59:23.216223   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0910 18:59:23.216243   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0910 18:59:23.216286   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0910 18:59:25.077483   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.861176975s)
	I0910 18:59:25.077515   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0910 18:59:25.077547   71529 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0910 18:59:25.077640   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0910 18:59:25.919427   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0910 18:59:25.919478   71529 cache_images.go:123] Successfully loaded all cached images
	I0910 18:59:25.919486   71529 cache_images.go:92] duration metric: took 15.240223152s to LoadCachedImages
	I0910 18:59:25.919502   71529 kubeadm.go:934] updating node { 192.168.50.138 8443 v1.31.0 crio true true} ...
	I0910 18:59:25.919622   71529 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-347802 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-347802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:59:25.919710   71529 ssh_runner.go:195] Run: crio config
	I0910 18:59:25.964461   71529 cni.go:84] Creating CNI manager for ""
	I0910 18:59:25.964489   71529 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:25.964509   71529 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:25.964535   71529 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.138 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-347802 NodeName:no-preload-347802 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:59:25.964698   71529 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-347802"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:59:25.964780   71529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:59:25.975304   71529 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:25.975371   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:25.985124   71529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0910 18:59:26.003355   71529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:26.020117   71529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0910 18:59:26.037026   71529 ssh_runner.go:195] Run: grep 192.168.50.138	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:26.041140   71529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:26.053643   71529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:26.175281   71529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:26.193153   71529 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802 for IP: 192.168.50.138
	I0910 18:59:26.193181   71529 certs.go:194] generating shared ca certs ...
	I0910 18:59:26.193203   71529 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:26.193398   71529 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:26.193452   71529 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:26.193466   71529 certs.go:256] generating profile certs ...
	I0910 18:59:26.193582   71529 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/client.key
	I0910 18:59:26.193664   71529 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/apiserver.key.93ff3787
	I0910 18:59:26.193722   71529 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/proxy-client.key
	I0910 18:59:26.193871   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:26.193924   71529 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:26.193978   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:26.194026   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:26.194053   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:26.194083   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:26.194132   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:26.194868   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:26.231957   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:26.280213   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:26.310722   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:26.347855   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0910 18:59:26.386495   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:59:26.411742   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:26.435728   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:59:26.460305   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:26.484974   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:26.508782   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:26.531397   71529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:26.548219   71529 ssh_runner.go:195] Run: openssl version
	I0910 18:59:26.553969   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:26.564950   71529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:26.569539   71529 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:26.569594   71529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:26.575677   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:59:26.586342   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:26.606946   71529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:26.611671   71529 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:26.611720   71529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:26.617271   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:26.627833   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:26.638225   71529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:26.642722   71529 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:26.642759   71529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:26.648359   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:26.659003   71529 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:26.663236   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:26.668896   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:26.674346   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:26.680028   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:26.685462   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:26.691097   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 18:59:26.696620   71529 kubeadm.go:392] StartCluster: {Name:no-preload-347802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-347802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:26.696704   71529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:26.696746   71529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:26.733823   71529 cri.go:89] found id: ""
	I0910 18:59:26.733883   71529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:26.744565   71529 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:26.744584   71529 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:26.744620   71529 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:26.754754   71529 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:26.755687   71529 kubeconfig.go:125] found "no-preload-347802" server: "https://192.168.50.138:8443"
	I0910 18:59:26.757732   71529 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:26.767140   71529 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.138
	I0910 18:59:26.767167   71529 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:26.767180   71529 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:26.767235   71529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:26.805555   71529 cri.go:89] found id: ""
	I0910 18:59:26.805616   71529 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:26.822806   71529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:26.832434   71529 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:26.832456   71529 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:26.832499   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:59:26.841225   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:26.841288   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:26.850145   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:59:26.859016   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:26.859070   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:26.868806   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:59:26.877814   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:26.877867   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:26.886985   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:59:26.895859   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:26.895911   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:59:26.905600   71529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:59:26.915716   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:27.038963   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:30.202285   72122 start.go:364] duration metric: took 3m27.611616445s to acquireMachinesLock for "old-k8s-version-432422"
	I0910 18:59:30.202346   72122 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:30.202377   72122 fix.go:54] fixHost starting: 
	I0910 18:59:30.202807   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:30.202842   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:30.222440   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0910 18:59:30.222927   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:30.223415   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:59:30.223435   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:30.223748   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:30.223905   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:30.224034   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetState
	I0910 18:59:30.225464   72122 fix.go:112] recreateIfNeeded on old-k8s-version-432422: state=Stopped err=<nil>
	I0910 18:59:30.225505   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	W0910 18:59:30.225655   72122 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:30.227698   72122 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-432422" ...
	I0910 18:59:28.790020   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.790390   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Found IP for machine: 192.168.72.54
	I0910 18:59:28.790424   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has current primary IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.790435   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Reserving static IP address...
	I0910 18:59:28.790758   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Reserved static IP address: 192.168.72.54
	I0910 18:59:28.790780   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for SSH to be available...
	I0910 18:59:28.790811   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-557504", mac: "52:54:00:19:b8:3d", ip: "192.168.72.54"} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.790839   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | skip adding static IP to network mk-default-k8s-diff-port-557504 - found existing host DHCP lease matching {name: "default-k8s-diff-port-557504", mac: "52:54:00:19:b8:3d", ip: "192.168.72.54"}
	I0910 18:59:28.790856   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Getting to WaitForSSH function...
	I0910 18:59:28.792644   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.792947   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.792978   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.793114   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Using SSH client type: external
	I0910 18:59:28.793135   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa (-rw-------)
	I0910 18:59:28.793192   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:28.793242   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | About to run SSH command:
	I0910 18:59:28.793272   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | exit 0
	I0910 18:59:28.921644   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | SSH cmd err, output: <nil>: 
	I0910 18:59:28.921983   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetConfigRaw
	I0910 18:59:28.922663   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:28.925273   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.925614   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.925639   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.925884   71627 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/config.json ...
	I0910 18:59:28.926061   71627 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:28.926077   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:28.926272   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:28.928411   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.928731   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.928758   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.928909   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:28.929096   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:28.929249   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:28.929371   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:28.929552   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:28.929722   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:28.929732   71627 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:29.041454   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:29.041486   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetMachineName
	I0910 18:59:29.041745   71627 buildroot.go:166] provisioning hostname "default-k8s-diff-port-557504"
	I0910 18:59:29.041766   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetMachineName
	I0910 18:59:29.041965   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.044784   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.045141   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.045182   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.045358   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.045528   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.045705   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.045810   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.045968   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:29.046158   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:29.046173   71627 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-557504 && echo "default-k8s-diff-port-557504" | sudo tee /etc/hostname
	I0910 18:59:29.180227   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-557504
	
	I0910 18:59:29.180257   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.182815   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.183166   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.183200   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.183416   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.183612   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.183779   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.183883   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.184053   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:29.184258   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:29.184276   71627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-557504' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-557504/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-557504' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:59:29.315908   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:59:29.315942   71627 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:29.315981   71627 buildroot.go:174] setting up certificates
	I0910 18:59:29.315996   71627 provision.go:84] configureAuth start
	I0910 18:59:29.316013   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetMachineName
	I0910 18:59:29.316262   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:29.319207   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.319580   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.319609   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.319780   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.321973   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.322318   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.322352   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.322499   71627 provision.go:143] copyHostCerts
	I0910 18:59:29.322564   71627 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:29.322577   71627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:29.322647   71627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:29.322772   71627 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:29.322786   71627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:29.322832   71627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:29.322938   71627 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:29.322951   71627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:29.322986   71627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:29.323065   71627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-557504 san=[127.0.0.1 192.168.72.54 default-k8s-diff-port-557504 localhost minikube]
	I0910 18:59:29.488131   71627 provision.go:177] copyRemoteCerts
	I0910 18:59:29.488187   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:29.488210   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.491095   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.491441   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.491467   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.491666   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.491830   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.491973   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.492123   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:29.584016   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:29.614749   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0910 18:59:29.646904   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:29.677788   71627 provision.go:87] duration metric: took 361.777725ms to configureAuth
	I0910 18:59:29.677820   71627 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:29.678048   71627 config.go:182] Loaded profile config "default-k8s-diff-port-557504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:59:29.678135   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.680932   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.681372   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.681394   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.681674   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.681868   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.682011   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.682175   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.682431   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:29.682638   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:29.682665   71627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:29.934027   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:29.934058   71627 machine.go:96] duration metric: took 1.007985288s to provisionDockerMachine
	I0910 18:59:29.934071   71627 start.go:293] postStartSetup for "default-k8s-diff-port-557504" (driver="kvm2")
	I0910 18:59:29.934084   71627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:29.934104   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:29.934415   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:29.934447   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.937552   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.937917   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.937948   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.938110   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.938315   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.938496   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.938645   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:30.030842   71627 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:30.036158   71627 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:30.036180   71627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:30.036267   71627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:30.036380   71627 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:30.036520   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:30.048860   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:30.075362   71627 start.go:296] duration metric: took 141.276186ms for postStartSetup
	I0910 18:59:30.075398   71627 fix.go:56] duration metric: took 20.817735357s for fixHost
	I0910 18:59:30.075421   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:30.078501   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.078996   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.079026   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.079195   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:30.079373   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.079561   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.079704   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:30.079908   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:30.080089   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:30.080102   71627 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:30.202112   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994770.178719125
	
	I0910 18:59:30.202139   71627 fix.go:216] guest clock: 1725994770.178719125
	I0910 18:59:30.202149   71627 fix.go:229] Guest: 2024-09-10 18:59:30.178719125 +0000 UTC Remote: 2024-09-10 18:59:30.075402937 +0000 UTC m=+288.723404352 (delta=103.316188ms)
	I0910 18:59:30.202175   71627 fix.go:200] guest clock delta is within tolerance: 103.316188ms
	I0910 18:59:30.202184   71627 start.go:83] releasing machines lock for "default-k8s-diff-port-557504", held for 20.944552577s
	I0910 18:59:30.202221   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.202522   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:30.205728   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.206068   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.206101   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.206267   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.206830   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.207011   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.207100   71627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:30.207171   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:30.207378   71627 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:30.207399   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:30.209851   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210130   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210182   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.210220   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210400   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:30.210553   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.210555   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.210625   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210735   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:30.210785   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:30.210849   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.210949   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:30.211002   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:30.211132   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:30.317738   71627 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:30.325333   71627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:30.485483   71627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:30.492979   71627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:30.493064   71627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:30.518974   71627 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:30.518998   71627 start.go:495] detecting cgroup driver to use...
	I0910 18:59:30.519192   71627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:30.539578   71627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:30.554986   71627 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:30.555045   71627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:30.570454   71627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:30.590125   71627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:30.738819   71627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:30.930750   71627 docker.go:233] disabling docker service ...
	I0910 18:59:30.930811   71627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:30.946226   71627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:30.961633   71627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:31.086069   71627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:31.208629   71627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:31.225988   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:31.248059   71627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:59:31.248127   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.260212   71627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:31.260296   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.271128   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.282002   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.296901   71627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:31.309739   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.325469   71627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.350404   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
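
The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10, switches cgroup_manager to "cgroupfs", forces conmon_cgroup to "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A rough Go equivalent of the first two edits, shown only to make the regex intent explicit; the sample input and the lack of error handling are simplifications.

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := []byte("# crio drop-in\npause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n")

        // Mirror: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := pause.ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))

        // Mirror: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

        fmt.Print(string(out))
    }
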
	I0910 18:59:31.366130   71627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:31.379206   71627 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:31.379259   71627 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:31.395015   71627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:31.406339   71627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:31.538783   71627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:59:31.656815   71627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:31.656886   71627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:31.665263   71627 start.go:563] Will wait 60s for crictl version
	I0910 18:59:31.665333   71627 ssh_runner.go:195] Run: which crictl
	I0910 18:59:31.670317   71627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:31.719549   71627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:31.719641   71627 ssh_runner.go:195] Run: crio --version
	I0910 18:59:31.753801   71627 ssh_runner.go:195] Run: crio --version
	I0910 18:59:31.787092   71627 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 18:59:28.257536   71529 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.218537615s)
	I0910 18:59:28.257562   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:28.451173   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:28.516432   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:28.605746   71529 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:59:28.605823   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:29.106870   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:29.606340   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:29.623814   71529 api_server.go:72] duration metric: took 1.018071553s to wait for apiserver process to appear ...
	I0910 18:59:29.623842   71529 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:59:29.623864   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:29.624282   71529 api_server.go:269] stopped: https://192.168.50.138:8443/healthz: Get "https://192.168.50.138:8443/healthz": dial tcp 192.168.50.138:8443: connect: connection refused
	I0910 18:59:30.124145   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:30.228896   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .Start
	I0910 18:59:30.229066   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring networks are active...
	I0910 18:59:30.229735   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring network default is active
	I0910 18:59:30.230126   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring network mk-old-k8s-version-432422 is active
	I0910 18:59:30.230559   72122 main.go:141] libmachine: (old-k8s-version-432422) Getting domain xml...
	I0910 18:59:30.231206   72122 main.go:141] libmachine: (old-k8s-version-432422) Creating domain...
	I0910 18:59:31.669616   72122 main.go:141] libmachine: (old-k8s-version-432422) Waiting to get IP...
	I0910 18:59:31.670682   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:31.671124   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:31.671225   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:31.671101   72995 retry.go:31] will retry after 285.109621ms: waiting for machine to come up
	I0910 18:59:31.957711   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:31.958140   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:31.958169   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:31.958103   72995 retry.go:31] will retry after 306.703176ms: waiting for machine to come up
	I0910 18:59:32.266797   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:32.267299   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:32.267333   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:32.267226   72995 retry.go:31] will retry after 327.953362ms: waiting for machine to come up
	I0910 18:59:32.494151   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:59:32.494177   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:59:32.494193   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:32.550283   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:59:32.550317   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:59:32.624486   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:32.646548   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:32.646583   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:33.124697   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:33.139775   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:33.139814   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:33.623998   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:33.632392   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:33.632430   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:34.123979   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:34.133552   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0910 18:59:34.143511   71529 api_server.go:141] control plane version: v1.31.0
	I0910 18:59:34.143543   71529 api_server.go:131] duration metric: took 4.519693435s to wait for apiserver health ...
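
api_server.go above polls the /healthz endpoint roughly every half second, treating the 403 (anonymous user) and 500 (post-start hooks still failing) responses as "not ready yet" until a plain 200/ok arrives. A minimal sketch of that polling loop; skipping TLS verification and the fixed 500ms interval are simplifications here (the real client authenticates against the cluster CA).

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Self-signed apiserver cert; verification is skipped in this sketch only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz answered "ok"
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.138:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
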
	I0910 18:59:34.143552   71529 cni.go:84] Creating CNI manager for ""
	I0910 18:59:34.143558   71529 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:34.145562   71529 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 18:59:31.788472   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:31.791698   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:31.792063   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:31.792102   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:31.792342   71627 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:31.798045   71627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:31.814552   71627 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-557504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-557504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:31.814718   71627 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:59:31.814775   71627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:31.863576   71627 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 18:59:31.863655   71627 ssh_runner.go:195] Run: which lz4
	I0910 18:59:31.868776   71627 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 18:59:31.874162   71627 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 18:59:31.874194   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 18:59:33.358271   71627 crio.go:462] duration metric: took 1.489531006s to copy over tarball
	I0910 18:59:33.358356   71627 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 18:59:35.759805   71627 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.401424942s)
	I0910 18:59:35.759833   71627 crio.go:469] duration metric: took 2.401529016s to extract the tarball
	I0910 18:59:35.759842   71627 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 18:59:35.797349   71627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:35.849544   71627 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:59:35.849571   71627 cache_images.go:84] Images are preloaded, skipping loading
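
crio.go above decides whether the preload tarball is needed by listing images with "crictl images --output json" and looking for the expected kube-apiserver tag; once the tarball is extracted, the same listing shows all images present and loading is skipped. A small sketch of that check; the JSON field names ("images", "repoTags") reflect crictl's documented output and should be treated as an assumption here.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // crictlImages models only the fields this check needs.
    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether the crictl JSON listing contains the given tag.
    func hasImage(listing []byte, tag string) (bool, error) {
        var imgs crictlImages
        if err := json.Unmarshal(listing, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"]}]}`)
        ok, _ := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.31.0")
        fmt.Println(ok) // true
    }
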
	I0910 18:59:35.849583   71627 kubeadm.go:934] updating node { 192.168.72.54 8444 v1.31.0 crio true true} ...
	I0910 18:59:35.849706   71627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-557504 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-557504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:59:35.849783   71627 ssh_runner.go:195] Run: crio config
	I0910 18:59:35.896486   71627 cni.go:84] Creating CNI manager for ""
	I0910 18:59:35.896514   71627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:35.896534   71627 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:35.896556   71627 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-557504 NodeName:default-k8s-diff-port-557504 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:59:35.896707   71627 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-557504"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:59:35.896777   71627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:59:35.907249   71627 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:35.907337   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:35.917196   71627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0910 18:59:35.935072   71627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:35.953823   71627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0910 18:59:35.970728   71627 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:35.974648   71627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:35.986487   71627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:36.144443   71627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:36.164942   71627 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504 for IP: 192.168.72.54
	I0910 18:59:36.164972   71627 certs.go:194] generating shared ca certs ...
	I0910 18:59:36.164990   71627 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:36.165172   71627 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:36.165242   71627 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:36.165255   71627 certs.go:256] generating profile certs ...
	I0910 18:59:36.165382   71627 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/client.key
	I0910 18:59:36.165460   71627 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/apiserver.key.5cc31a18
	I0910 18:59:36.165505   71627 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/proxy-client.key
	I0910 18:59:36.165640   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:36.165680   71627 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:36.165700   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:36.165733   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:36.165770   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:36.165803   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:36.165874   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:36.166687   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:36.203302   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:36.230599   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:36.269735   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:36.311674   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0910 18:59:36.354614   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 18:59:36.379082   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:34.146903   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 18:59:34.163037   71529 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 18:59:34.189830   71529 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:59:34.200702   71529 system_pods.go:59] 8 kube-system pods found
	I0910 18:59:34.200751   71529 system_pods.go:61] "coredns-6f6b679f8f-54rpl" [2e301d43-a54a-4836-abf8-a45f5bc15889] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 18:59:34.200762   71529 system_pods.go:61] "etcd-no-preload-347802" [0fdffb97-72c6-4588-9593-46bcbed0a9fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 18:59:34.200773   71529 system_pods.go:61] "kube-apiserver-no-preload-347802" [3cf5abac-1d94-4ee2-a962-9daad308ec8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 18:59:34.200782   71529 system_pods.go:61] "kube-controller-manager-no-preload-347802" [6769757d-57fd-46c8-8f78-d20f80e592d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 18:59:34.200788   71529 system_pods.go:61] "kube-proxy-7v9n8" [d01842ad-3dae-49e1-8570-db9bcf4d0afc] Running
	I0910 18:59:34.200797   71529 system_pods.go:61] "kube-scheduler-no-preload-347802" [20e59c6b-4387-4dd0-b242-78d107775275] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 18:59:34.200804   71529 system_pods.go:61] "metrics-server-6867b74b74-w8rqv" [52535081-4503-4136-963d-6b2db6c0224e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 18:59:34.200809   71529 system_pods.go:61] "storage-provisioner" [9f7c0178-7194-4c73-95a4-5a3c0091f3ac] Running
	I0910 18:59:34.200816   71529 system_pods.go:74] duration metric: took 10.965409ms to wait for pod list to return data ...
	I0910 18:59:34.200857   71529 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:59:34.204544   71529 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:59:34.204568   71529 node_conditions.go:123] node cpu capacity is 2
	I0910 18:59:34.204580   71529 node_conditions.go:105] duration metric: took 3.714534ms to run NodePressure ...
	I0910 18:59:34.204597   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:34.487106   71529 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 18:59:34.491817   71529 kubeadm.go:739] kubelet initialised
	I0910 18:59:34.491838   71529 kubeadm.go:740] duration metric: took 4.708046ms waiting for restarted kubelet to initialise ...
	I0910 18:59:34.491845   71529 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:34.496604   71529 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:34.501535   71529 pod_ready.go:98] node "no-preload-347802" hosting pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.501553   71529 pod_ready.go:82] duration metric: took 4.927724ms for pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:34.501561   71529 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-347802" hosting pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.501567   71529 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:34.505473   71529 pod_ready.go:98] node "no-preload-347802" hosting pod "etcd-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.505491   71529 pod_ready.go:82] duration metric: took 3.917111ms for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:34.505499   71529 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-347802" hosting pod "etcd-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.505507   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:34.510025   71529 pod_ready.go:98] node "no-preload-347802" hosting pod "kube-apiserver-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.510043   71529 pod_ready.go:82] duration metric: took 4.522609ms for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:34.510050   71529 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-347802" hosting pod "kube-apiserver-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.510056   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:36.519023   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
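
pod_ready.go above waits for each system-critical pod to report the Ready condition, and skips ahead when the hosting node itself is not Ready yet. A hedged client-go sketch of the per-pod check; building the clientset from $KUBECONFIG and the helper name are illustrative assumptions, not minikube's code.

    package main

    import (
        "context"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the named pod's Ready condition is True.
    func podIsReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            fmt.Println(err)
            return
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Println(err)
            return
        }
        ready, err := podIsReady(context.Background(), cs, "kube-system", "etcd-no-preload-347802")
        fmt.Println(ready, err)
    }
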
	I0910 18:59:32.597017   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:32.597589   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:32.597616   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:32.597554   72995 retry.go:31] will retry after 448.654363ms: waiting for machine to come up
	I0910 18:59:33.048100   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:33.048559   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:33.048590   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:33.048478   72995 retry.go:31] will retry after 654.829574ms: waiting for machine to come up
	I0910 18:59:33.704902   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:33.705446   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:33.705475   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:33.705363   72995 retry.go:31] will retry after 610.514078ms: waiting for machine to come up
	I0910 18:59:34.316978   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:34.317481   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:34.317503   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:34.317430   72995 retry.go:31] will retry after 1.125805817s: waiting for machine to come up
	I0910 18:59:35.444880   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:35.445369   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:35.445394   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:35.445312   72995 retry.go:31] will retry after 1.484426931s: waiting for machine to come up
	I0910 18:59:36.931028   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:36.931568   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:36.931613   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:36.931524   72995 retry.go:31] will retry after 1.819998768s: waiting for machine to come up
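
The old-k8s-version-432422 lines above show retry.go waiting for the restarted domain to obtain a DHCP lease, sleeping a growing, jittered interval between attempts. A generic sketch of that retry-with-backoff pattern; the doubling factor, jitter, and 10s cap are assumptions rather than the library's exact parameters.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff calls fn until it succeeds or attempts run out, sleeping a
    // growing, jittered interval between tries.
    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
        wait := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            sleep := wait + time.Duration(rand.Int63n(int64(wait)/2+1)) // up to 50% jitter
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            if wait *= 2; wait > 10*time.Second { // assumed cap
                wait = 10 * time.Second
            }
        }
        return err
    }

    func main() {
        tries := 0
        err := retryWithBackoff(5, 300*time.Millisecond, func() error {
            if tries++; tries < 3 {
                return errors.New("waiting for machine to come up")
            }
            return nil
        })
        fmt.Println("done:", err)
    }
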
	I0910 18:59:36.403353   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 18:59:36.427345   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:36.452765   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:36.485795   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:36.512944   71627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:36.532454   71627 ssh_runner.go:195] Run: openssl version
	I0910 18:59:36.538449   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:36.550806   71627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:36.555761   71627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:36.555819   71627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:36.562430   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:36.573730   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:36.584987   71627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:36.589551   71627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:36.589615   71627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:36.595496   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:36.607821   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:36.620298   71627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:36.624888   71627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:36.624939   71627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:36.630534   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:59:36.641657   71627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:36.646317   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:36.652748   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:36.661166   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:36.670240   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:36.676776   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:36.686442   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
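
The "openssl x509 -noout -checkend 86400" runs above confirm each existing control-plane certificate remains valid for at least 24 hours before it is reused. An equivalent check with crypto/x509; the file path is taken from the log, the rest is a plain sketch.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin mirrors `openssl x509 -checkend <seconds>`: it reports whether
    // the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
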
	I0910 18:59:36.693233   71627 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-557504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-557504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:36.693351   71627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:36.693414   71627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:36.743159   71627 cri.go:89] found id: ""
	I0910 18:59:36.743256   71627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:36.754428   71627 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:36.754451   71627 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:36.754505   71627 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:36.765126   71627 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:36.766213   71627 kubeconfig.go:125] found "default-k8s-diff-port-557504" server: "https://192.168.72.54:8444"
	I0910 18:59:36.768428   71627 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:36.778678   71627 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I0910 18:59:36.778715   71627 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:36.778728   71627 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:36.778779   71627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:36.824031   71627 cri.go:89] found id: ""
	I0910 18:59:36.824107   71627 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:36.840585   71627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:36.851445   71627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:36.851462   71627 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:36.851508   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0910 18:59:36.860630   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:36.860682   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:36.869973   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0910 18:59:36.880034   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:36.880099   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:36.889684   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0910 18:59:36.898786   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:36.898870   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:36.908328   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0910 18:59:36.917272   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:36.917334   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:59:36.928923   71627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:59:36.940238   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:37.079143   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:37.945317   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:38.157807   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:38.245283   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
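Because existing configuration files were found, the restart path above re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the rendered /var/tmp/minikube/kubeadm.yaml instead of doing a full `kubeadm init`. A rough sketch of how one such phase invocation could be shelled out, assuming kubeadm is staged under /var/lib/minikube/binaries as the log shows (this mirrors the commands, not minikube's internal ssh_runner API):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runKubeadmPhase runs one `kubeadm init phase <phase...>` against the
// generated config, using the kubeadm binary staged for that Kubernetes version.
func runKubeadmPhase(version string, phase ...string) error {
	bin := "/var/lib/minikube/binaries/" + version + "/kubeadm"
	args := append([]string{"init", "phase"}, phase...)
	args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Same phase order as the log lines above.
	for _, phase := range [][]string{
		{"certs", "all"}, {"kubeconfig", "all"}, {"kubelet-start"},
		{"control-plane", "all"}, {"etcd", "local"},
	} {
		if err := runKubeadmPhase("v1.31.0", phase...); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}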
	I0910 18:59:38.353653   71627 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:59:38.353746   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:38.854791   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:39.354743   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:39.409511   71627 api_server.go:72] duration metric: took 1.055855393s to wait for apiserver process to appear ...
	I0910 18:59:39.409543   71627 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:59:39.409566   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:39.410104   71627 api_server.go:269] stopped: https://192.168.72.54:8444/healthz: Get "https://192.168.72.54:8444/healthz": dial tcp 192.168.72.54:8444: connect: connection refused
	I0910 18:59:39.909665   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:39.018802   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:41.517911   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:38.753463   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:38.754076   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:38.754107   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:38.754019   72995 retry.go:31] will retry after 2.258214375s: waiting for machine to come up
	I0910 18:59:41.013524   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:41.013988   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:41.014011   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:41.013910   72995 retry.go:31] will retry after 2.030553777s: waiting for machine to come up
	I0910 18:59:41.976133   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:59:41.976166   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:59:41.976179   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:42.080631   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:42.080674   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:42.409865   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:42.421093   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:42.421174   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:42.910272   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:42.914729   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:42.914757   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:43.410280   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:43.414731   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I0910 18:59:43.421135   71627 api_server.go:141] control plane version: v1.31.0
	I0910 18:59:43.421163   71627 api_server.go:131] duration metric: took 4.011612782s to wait for apiserver health ...
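The health wait above polls https://192.168.72.54:8444/healthz roughly every 500ms: first the connection is refused while the apiserver starts, then it answers 403 to the anonymous probe, then 500 while post-start hooks (RBAC bootstrap roles, priority classes, apiservice registration) finish, and finally 200 "ok". A minimal polling sketch, assuming certificate verification is skipped for the probe (as the anonymous 403 responses above suggest; names are illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes; non-200 bodies are printed like the log output above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.54:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}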
	I0910 18:59:43.421172   71627 cni.go:84] Creating CNI manager for ""
	I0910 18:59:43.421178   71627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:43.423063   71627 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 18:59:43.424278   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 18:59:43.434823   71627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
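With the kvm2 driver and the crio runtime, the bridge CNI is selected and a single conflist is written into /etc/cni/net.d. The 496-byte payload itself is not printed in the log; the sketch below writes a typical bridge + host-local configuration purely to illustrate the file's shape, and the real 1-k8s.conflist may differ in version, subnet, and plugin options:

package main

import (
	"fmt"
	"os"
)

// Representative bridge CNI conflist; contents are an assumption, not the
// actual bytes minikube copied in the log line above.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}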
	I0910 18:59:43.461604   71627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:59:43.477566   71627 system_pods.go:59] 8 kube-system pods found
	I0910 18:59:43.477592   71627 system_pods.go:61] "coredns-6f6b679f8f-nq9fl" [87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 18:59:43.477600   71627 system_pods.go:61] "etcd-default-k8s-diff-port-557504" [484ce7e2-e5b6-4b96-928e-c4519e072425] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 18:59:43.477606   71627 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-557504" [55a71e48-2cca-484c-8186-176494f8158c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 18:59:43.477616   71627 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-557504" [98e7866a-c4a1-4b90-a758-506502405feb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 18:59:43.477623   71627 system_pods.go:61] "kube-proxy-4t8r9" [aca739fc-0169-433b-85f1-17bf3ab538cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0910 18:59:43.477631   71627 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-557504" [eaa24622-2f9c-47bd-b002-173d5fb06126] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 18:59:43.477638   71627 system_pods.go:61] "metrics-server-6867b74b74-4sfwg" [6b5d0161-6a62-4752-b714-ada6b3772956] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 18:59:43.477648   71627 system_pods.go:61] "storage-provisioner" [7536d42b-90f4-44de-a7ba-652f8e535304] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 18:59:43.477658   71627 system_pods.go:74] duration metric: took 16.035701ms to wait for pod list to return data ...
	I0910 18:59:43.477673   71627 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:59:43.485818   71627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:59:43.485840   71627 node_conditions.go:123] node cpu capacity is 2
	I0910 18:59:43.485850   71627 node_conditions.go:105] duration metric: took 8.173642ms to run NodePressure ...
	I0910 18:59:43.485864   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:43.752422   71627 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 18:59:43.756713   71627 kubeadm.go:739] kubelet initialised
	I0910 18:59:43.756735   71627 kubeadm.go:740] duration metric: took 4.285787ms waiting for restarted kubelet to initialise ...
	I0910 18:59:43.756744   71627 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:43.762384   71627 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.767080   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.767099   71627 pod_ready.go:82] duration metric: took 4.695864ms for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.767109   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.767116   71627 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.772560   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.772579   71627 pod_ready.go:82] duration metric: took 5.453737ms for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.772588   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.772593   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.776328   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.776345   71627 pod_ready.go:82] duration metric: took 3.745149ms for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.776352   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.776357   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.865825   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.865850   71627 pod_ready.go:82] duration metric: took 89.48636ms for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.865862   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.865868   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:44.264892   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-proxy-4t8r9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.264922   71627 pod_ready.go:82] duration metric: took 399.047611ms for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:44.264932   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-proxy-4t8r9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.264938   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:44.665376   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.665402   71627 pod_ready.go:82] duration metric: took 400.457184ms for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:44.665413   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.665418   71627 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:45.065696   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:45.065724   71627 pod_ready.go:82] duration metric: took 400.298527ms for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:45.065736   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:45.065743   71627 pod_ready.go:39] duration metric: took 1.308988307s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
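Each pod_ready wait above boils down to reading the pod's status conditions; while the node itself reports Ready=False, every pod check short-circuits with the "(skipping!)" messages seen here. A minimal client-go sketch of the underlying Ready check (kubeconfig path, pod name, and helper name are illustrative, not minikube's actual helper):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named pod has condition Ready=True.
func podIsReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19598-5973/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podIsReady(context.Background(), client, "kube-system", "coredns-6f6b679f8f-nq9fl")
	fmt.Println(ready, err)
}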
	I0910 18:59:45.065759   71627 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 18:59:45.077813   71627 ops.go:34] apiserver oom_adj: -16
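The oom_adj check reads /proc/<apiserver-pid>/oom_adj and expects a strongly negative value (-16 here), confirming the kubelet has shielded the apiserver from the OOM killer. A small sketch of that read, assuming the PID has already been resolved with pgrep as in the log (the literal PID below is illustrative):

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// oomAdjForPID returns the oom_adj value for a process, equivalent to the
// `cat /proc/$(pgrep kube-apiserver)/oom_adj` step in the log.
func oomAdjForPID(pid int) (int, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(data)))
}

func main() {
	// PID is illustrative; the test resolves it with pgrep at run time.
	adj, err := oomAdjForPID(1234)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj) // expected: -16
}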
	I0910 18:59:45.077838   71627 kubeadm.go:597] duration metric: took 8.323378955s to restartPrimaryControlPlane
	I0910 18:59:45.077846   71627 kubeadm.go:394] duration metric: took 8.384626167s to StartCluster
	I0910 18:59:45.077860   71627 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:45.077980   71627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:59:45.079979   71627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:45.080304   71627 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 18:59:45.080399   71627 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 18:59:45.080478   71627 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-557504"
	I0910 18:59:45.080510   71627 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-557504"
	I0910 18:59:45.080506   71627 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-557504"
	W0910 18:59:45.080523   71627 addons.go:243] addon storage-provisioner should already be in state true
	I0910 18:59:45.080519   71627 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-557504"
	I0910 18:59:45.080553   71627 host.go:66] Checking if "default-k8s-diff-port-557504" exists ...
	I0910 18:59:45.080568   71627 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-557504"
	I0910 18:59:45.080568   71627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-557504"
	W0910 18:59:45.080582   71627 addons.go:243] addon metrics-server should already be in state true
	I0910 18:59:45.080529   71627 config.go:182] Loaded profile config "default-k8s-diff-port-557504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:59:45.080608   71627 host.go:66] Checking if "default-k8s-diff-port-557504" exists ...
	I0910 18:59:45.080906   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.080932   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.080989   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.080994   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.081015   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.081015   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.081905   71627 out.go:177] * Verifying Kubernetes components...
	I0910 18:59:45.083206   71627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:45.096019   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43935
	I0910 18:59:45.096288   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38379
	I0910 18:59:45.096453   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.096730   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.096984   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.097012   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.097243   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.097273   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.097401   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.097596   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.097678   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0910 18:59:45.097693   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.098049   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.098464   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.098504   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.099185   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.099207   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.099592   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.100125   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.100166   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.101159   71627 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-557504"
	W0910 18:59:45.101175   71627 addons.go:243] addon default-storageclass should already be in state true
	I0910 18:59:45.101203   71627 host.go:66] Checking if "default-k8s-diff-port-557504" exists ...
	I0910 18:59:45.101501   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.101537   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.114823   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41241
	I0910 18:59:45.115253   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.115363   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38597
	I0910 18:59:45.115737   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.115759   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.115795   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.116106   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.116244   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.116270   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.116289   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.116696   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.117290   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.117327   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.117546   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36199
	I0910 18:59:45.117879   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.118496   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:45.118631   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.118643   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.118949   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.119107   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.120353   71627 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 18:59:45.120775   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:45.121685   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 18:59:45.121699   71627 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 18:59:45.121718   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:45.122500   71627 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:45.123762   71627 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:59:45.123778   71627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 18:59:45.123792   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:45.125345   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.125926   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:45.126161   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:45.126357   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:45.125943   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.126548   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:45.126661   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:45.127075   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.127507   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:45.127522   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.127675   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:45.127810   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:45.127905   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:45.127997   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:45.132978   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39839
	I0910 18:59:45.133303   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.133757   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.133779   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.134043   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.134188   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.135712   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:45.135917   71627 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 18:59:45.135928   71627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 18:59:45.135938   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:45.138375   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.138616   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:45.138629   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.138768   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:45.138937   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:45.139054   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:45.139181   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:45.293036   71627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:45.311747   71627 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-557504" to be "Ready" ...
	I0910 18:59:45.425820   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 18:59:45.425852   71627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 18:59:45.430783   71627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:59:45.441452   71627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 18:59:45.481245   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 18:59:45.481268   71627 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 18:59:45.573348   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 18:59:45.573373   71627 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 18:59:45.634830   71627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 18:59:46.589194   71627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.147713188s)
	I0910 18:59:46.589253   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589266   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589284   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589311   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589321   71627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.158508631s)
	I0910 18:59:46.589343   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589355   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589700   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.589700   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.589723   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589729   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589730   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.589736   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.589738   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589741   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.589751   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.589752   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589761   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589774   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589816   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589755   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589852   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589961   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589971   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.590192   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.590207   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.590220   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.591675   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.591692   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.591702   71627 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-557504"
	I0910 18:59:46.595906   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.595921   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.596105   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.596126   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.598033   71627 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0910 18:59:44.023282   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:46.516768   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:47.016400   71529 pod_ready.go:93] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:47.016423   71529 pod_ready.go:82] duration metric: took 12.506359172s for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:47.016435   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7v9n8" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:47.020809   71529 pod_ready.go:93] pod "kube-proxy-7v9n8" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:47.020827   71529 pod_ready.go:82] duration metric: took 4.386051ms for pod "kube-proxy-7v9n8" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:47.020836   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.046937   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:43.047363   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:43.047393   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:43.047314   72995 retry.go:31] will retry after 2.233047134s: waiting for machine to come up
	I0910 18:59:45.282610   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:45.283104   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:45.283133   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:45.283026   72995 retry.go:31] will retry after 4.238676711s: waiting for machine to come up
	I0910 18:59:51.182133   71183 start.go:364] duration metric: took 56.422548201s to acquireMachinesLock for "embed-certs-836868"
	I0910 18:59:51.182195   71183 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:51.182206   71183 fix.go:54] fixHost starting: 
	I0910 18:59:51.182600   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:51.182637   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:51.198943   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33067
	I0910 18:59:51.199345   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:51.199803   71183 main.go:141] libmachine: Using API Version  1
	I0910 18:59:51.199828   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:51.200153   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:51.200364   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 18:59:51.200493   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 18:59:51.202100   71183 fix.go:112] recreateIfNeeded on embed-certs-836868: state=Stopped err=<nil>
	I0910 18:59:51.202123   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	W0910 18:59:51.202286   71183 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:51.204028   71183 out.go:177] * Restarting existing kvm2 VM for "embed-certs-836868" ...
	I0910 18:59:46.599125   71627 addons.go:510] duration metric: took 1.518742666s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0910 18:59:47.316003   71627 node_ready.go:53] node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:49.316691   71627 node_ready.go:53] node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:49.027374   71529 pod_ready.go:93] pod "kube-scheduler-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:49.027393   71529 pod_ready.go:82] duration metric: took 2.006551523s for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:49.027403   71529 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:51.034568   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:51.205180   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Start
	I0910 18:59:51.205332   71183 main.go:141] libmachine: (embed-certs-836868) Ensuring networks are active...
	I0910 18:59:51.205952   71183 main.go:141] libmachine: (embed-certs-836868) Ensuring network default is active
	I0910 18:59:51.206322   71183 main.go:141] libmachine: (embed-certs-836868) Ensuring network mk-embed-certs-836868 is active
	I0910 18:59:51.206717   71183 main.go:141] libmachine: (embed-certs-836868) Getting domain xml...
	I0910 18:59:51.207430   71183 main.go:141] libmachine: (embed-certs-836868) Creating domain...
	I0910 18:59:49.526000   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.526536   72122 main.go:141] libmachine: (old-k8s-version-432422) Found IP for machine: 192.168.61.51
	I0910 18:59:49.526558   72122 main.go:141] libmachine: (old-k8s-version-432422) Reserving static IP address...
	I0910 18:59:49.526569   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has current primary IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.527018   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "old-k8s-version-432422", mac: "52:54:00:65:ab:4d", ip: "192.168.61.51"} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.527063   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | skip adding static IP to network mk-old-k8s-version-432422 - found existing host DHCP lease matching {name: "old-k8s-version-432422", mac: "52:54:00:65:ab:4d", ip: "192.168.61.51"}
	I0910 18:59:49.527084   72122 main.go:141] libmachine: (old-k8s-version-432422) Reserved static IP address: 192.168.61.51
	I0910 18:59:49.527099   72122 main.go:141] libmachine: (old-k8s-version-432422) Waiting for SSH to be available...
	I0910 18:59:49.527113   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Getting to WaitForSSH function...
	I0910 18:59:49.529544   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.529962   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.529987   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.530143   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Using SSH client type: external
	I0910 18:59:49.530170   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa (-rw-------)
	I0910 18:59:49.530195   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:49.530208   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | About to run SSH command:
	I0910 18:59:49.530245   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | exit 0
	I0910 18:59:49.656944   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | SSH cmd err, output: <nil>: 
	I0910 18:59:49.657307   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetConfigRaw
	I0910 18:59:49.657926   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:49.660332   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.660689   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.660711   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.660992   72122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/config.json ...
	I0910 18:59:49.661238   72122 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:49.661259   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:49.661480   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.663824   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.664208   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.664236   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.664370   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.664565   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.664712   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.664887   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.665103   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.665392   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.665406   72122 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:49.769433   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:49.769468   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:49.769716   72122 buildroot.go:166] provisioning hostname "old-k8s-version-432422"
	I0910 18:59:49.769740   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:49.769918   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.772324   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.772710   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.772736   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.772875   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.773061   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.773245   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.773384   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.773554   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.773751   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.773764   72122 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-432422 && echo "old-k8s-version-432422" | sudo tee /etc/hostname
	I0910 18:59:49.891230   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-432422
	
	I0910 18:59:49.891259   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.894272   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.894641   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.894683   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.894820   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.894983   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.895094   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.895210   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.895330   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.895540   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.895559   72122 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-432422' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-432422/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-432422' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:59:50.011767   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:59:50.011795   72122 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:50.011843   72122 buildroot.go:174] setting up certificates
	I0910 18:59:50.011854   72122 provision.go:84] configureAuth start
	I0910 18:59:50.011866   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:50.012185   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:50.014947   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.015352   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.015388   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.015549   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.017712   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.018002   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.018036   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.018193   72122 provision.go:143] copyHostCerts
	I0910 18:59:50.018251   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:50.018265   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:50.018337   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:50.018481   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:50.018491   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:50.018513   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:50.018585   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:50.018594   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:50.018612   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:50.018667   72122 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-432422 san=[127.0.0.1 192.168.61.51 localhost minikube old-k8s-version-432422]
	I0910 18:59:50.528798   72122 provision.go:177] copyRemoteCerts
	I0910 18:59:50.528864   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:50.528900   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.532154   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.532576   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.532613   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.532765   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.532995   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.533205   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.533370   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:50.620169   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0910 18:59:50.647163   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:50.679214   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:50.704333   72122 provision.go:87] duration metric: took 692.46607ms to configureAuth
	I0910 18:59:50.704360   72122 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:50.704545   72122 config.go:182] Loaded profile config "old-k8s-version-432422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0910 18:59:50.704639   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.707529   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.707903   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.707931   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.708082   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.708297   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.708463   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.708641   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.708786   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:50.708954   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:50.708969   72122 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:50.935375   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:50.935403   72122 machine.go:96] duration metric: took 1.274152353s to provisionDockerMachine
	I0910 18:59:50.935414   72122 start.go:293] postStartSetup for "old-k8s-version-432422" (driver="kvm2")
	I0910 18:59:50.935424   72122 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:50.935448   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:50.935763   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:50.935796   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.938507   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.938865   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.938902   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.939008   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.939198   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.939529   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.939689   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.024726   72122 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:51.029522   72122 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:51.029547   72122 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:51.029632   72122 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:51.029734   72122 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:51.029848   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:51.042454   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:51.068748   72122 start.go:296] duration metric: took 133.318275ms for postStartSetup
	I0910 18:59:51.068792   72122 fix.go:56] duration metric: took 20.866428313s for fixHost
	I0910 18:59:51.068816   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.071533   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.071894   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.071921   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.072072   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.072264   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.072468   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.072616   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.072784   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:51.072938   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:51.072948   72122 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:51.181996   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994791.151610055
	
	I0910 18:59:51.182016   72122 fix.go:216] guest clock: 1725994791.151610055
	I0910 18:59:51.182024   72122 fix.go:229] Guest: 2024-09-10 18:59:51.151610055 +0000 UTC Remote: 2024-09-10 18:59:51.068796263 +0000 UTC m=+228.614166738 (delta=82.813792ms)
	I0910 18:59:51.182048   72122 fix.go:200] guest clock delta is within tolerance: 82.813792ms
	I0910 18:59:51.182055   72122 start.go:83] releasing machines lock for "old-k8s-version-432422", held for 20.979733564s
	I0910 18:59:51.182094   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.182331   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:51.184857   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.185183   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.185212   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.185346   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.185840   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.186006   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.186079   72122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:51.186143   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.186215   72122 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:51.186238   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.189304   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189674   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.189698   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189765   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189879   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.190057   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.190212   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.190230   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.190255   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.190358   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.190470   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.190652   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.190817   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.190948   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.296968   72122 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:51.303144   72122 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:51.447027   72122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:51.454963   72122 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:51.455032   72122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:51.474857   72122 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:51.474882   72122 start.go:495] detecting cgroup driver to use...
	I0910 18:59:51.474957   72122 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:51.490457   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:51.504502   72122 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:51.504569   72122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:51.523331   72122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:51.543438   72122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:51.678734   72122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:51.831736   72122 docker.go:233] disabling docker service ...
	I0910 18:59:51.831804   72122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:51.846805   72122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:51.865771   72122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:52.012922   72122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:52.161595   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:52.180034   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:52.200984   72122 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0910 18:59:52.201041   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.211927   72122 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:52.211989   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.223601   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.234211   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.246209   72122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:52.264079   72122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:52.277144   72122 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:52.277204   72122 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:52.292683   72122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:52.304601   72122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:52.421971   72122 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:59:52.544386   72122 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:52.544459   72122 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:52.551436   72122 start.go:563] Will wait 60s for crictl version
	I0910 18:59:52.551487   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:52.555614   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:52.598031   72122 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:52.598128   72122 ssh_runner.go:195] Run: crio --version
	I0910 18:59:52.629578   72122 ssh_runner.go:195] Run: crio --version
	I0910 18:59:52.662403   72122 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0910 18:59:51.815436   71627 node_ready.go:53] node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:52.816775   71627 node_ready.go:49] node "default-k8s-diff-port-557504" has status "Ready":"True"
	I0910 18:59:52.816809   71627 node_ready.go:38] duration metric: took 7.505015999s for node "default-k8s-diff-port-557504" to be "Ready" ...
	I0910 18:59:52.816821   71627 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:52.823528   71627 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.829667   71627 pod_ready.go:93] pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:52.829688   71627 pod_ready.go:82] duration metric: took 6.135159ms for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.829696   71627 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.833912   71627 pod_ready.go:93] pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:52.833933   71627 pod_ready.go:82] duration metric: took 4.231672ms for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.833942   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.838863   71627 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:52.838883   71627 pod_ready.go:82] duration metric: took 4.934379ms for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.838897   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:53.851413   71627 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:53.851437   71627 pod_ready.go:82] duration metric: took 1.012531075s for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:53.851447   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:54.020886   71627 pod_ready.go:93] pod "kube-proxy-4t8r9" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:54.020910   71627 pod_ready.go:82] duration metric: took 169.456474ms for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:54.020926   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:55.217416   71627 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:55.217440   71627 pod_ready.go:82] duration metric: took 1.196506075s for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:55.217451   71627 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:53.036769   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:55.536544   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:52.544041   71183 main.go:141] libmachine: (embed-certs-836868) Waiting to get IP...
	I0910 18:59:52.545001   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:52.545522   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:52.545586   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:52.545494   73202 retry.go:31] will retry after 260.451431ms: waiting for machine to come up
	I0910 18:59:52.807914   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:52.808351   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:52.808377   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:52.808307   73202 retry.go:31] will retry after 340.526757ms: waiting for machine to come up
	I0910 18:59:53.150854   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:53.151446   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:53.151476   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:53.151404   73202 retry.go:31] will retry after 470.620322ms: waiting for machine to come up
	I0910 18:59:53.624169   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:53.624709   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:53.624747   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:53.624657   73202 retry.go:31] will retry after 529.186273ms: waiting for machine to come up
	I0910 18:59:54.155156   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:54.155644   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:54.155673   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:54.155599   73202 retry.go:31] will retry after 575.877001ms: waiting for machine to come up
	I0910 18:59:54.733522   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:54.734049   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:54.734092   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:54.734000   73202 retry.go:31] will retry after 577.385946ms: waiting for machine to come up
	I0910 18:59:55.312705   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:55.313087   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:55.313114   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:55.313059   73202 retry.go:31] will retry after 735.788809ms: waiting for machine to come up
	I0910 18:59:56.049771   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:56.050272   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:56.050306   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:56.050224   73202 retry.go:31] will retry after 1.433431053s: waiting for machine to come up
	I0910 18:59:52.663465   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:52.666401   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:52.666796   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:52.666843   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:52.667002   72122 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:52.672338   72122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:52.688427   72122 kubeadm.go:883] updating cluster {Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:52.688559   72122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 18:59:52.688623   72122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:52.740370   72122 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0910 18:59:52.740447   72122 ssh_runner.go:195] Run: which lz4
	I0910 18:59:52.744925   72122 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 18:59:52.749840   72122 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 18:59:52.749872   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0910 18:59:54.437031   72122 crio.go:462] duration metric: took 1.692132914s to copy over tarball
	I0910 18:59:54.437124   72122 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 18:59:57.462705   72122 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.025545297s)
	I0910 18:59:57.462743   72122 crio.go:469] duration metric: took 3.025690485s to extract the tarball
	I0910 18:59:57.462753   72122 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 18:59:57.223959   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:59.224657   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:01.224783   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:58.035610   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:00.535779   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:57.485417   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:57.485870   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:57.485896   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:57.485815   73202 retry.go:31] will retry after 1.638565814s: waiting for machine to come up
	I0910 18:59:59.126134   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:59.126625   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:59.126657   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:59.126576   73202 retry.go:31] will retry after 2.127929201s: waiting for machine to come up
	I0910 19:00:01.256121   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:01.256665   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 19:00:01.256694   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 19:00:01.256612   73202 retry.go:31] will retry after 2.530100505s: waiting for machine to come up
	I0910 18:59:57.508817   72122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:57.551327   72122 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0910 18:59:57.551350   72122 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 18:59:57.551434   72122 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:57.551704   72122 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.551776   72122 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.552000   72122 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.551807   72122 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.551846   72122 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.551714   72122 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0910 18:59:57.551917   72122 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.553642   72122 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.553660   72122 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.553917   72122 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:57.553935   72122 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0910 18:59:57.554014   72122 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.554160   72122 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.554376   72122 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.554662   72122 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.726191   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.742799   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.745264   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.753214   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.768122   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.770828   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0910 18:59:57.774835   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.807657   72122 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0910 18:59:57.807693   72122 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.807733   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.908662   72122 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0910 18:59:57.908678   72122 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0910 18:59:57.908707   72122 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.908711   72122 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.908759   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.908760   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.920214   72122 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0910 18:59:57.920248   72122 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0910 18:59:57.920258   72122 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.920280   72122 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.920304   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.920313   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.937914   72122 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0910 18:59:57.937952   72122 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.937958   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.937999   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.938033   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.938006   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.938073   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.938063   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.938157   72122 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0910 18:59:57.938185   72122 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0910 18:59:57.938215   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:58.044082   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:58.044139   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:58.044146   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:58.044173   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.045813   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:58.045816   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:58.045849   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.198804   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:58.198841   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:58.198881   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:58.198944   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.198978   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:58.199000   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:58.199081   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.353153   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0910 18:59:58.353217   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0910 18:59:58.353232   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0910 18:59:58.353277   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0910 18:59:58.359353   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.359363   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0910 18:59:58.359421   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.386872   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:58.407734   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0910 18:59:58.425479   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0910 18:59:58.553340   72122 cache_images.go:92] duration metric: took 1.001972084s to LoadCachedImages
	W0910 18:59:58.553438   72122 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
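The lines above trace minikube's cached-image flow for the v1.20.0 images: each image is first looked up in the host Docker daemon (image.go:178 "No such image"), then probed in the VM's runtime with "sudo podman image inspect", stale copies are removed with crictl rmi, and the loader finally expects a tarball under .minikube/cache/images; here the cached kube-controller-manager_v1.20.0 file is missing, so LoadCachedImages fails and the images will have to be pulled instead. Below is a minimal sketch of that check-then-load decision, not minikube's real implementation; runtimeHasImage and loadFromCache are hypothetical stand-ins for the inspect and load steps.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// Hypothetical stand-ins for the "podman image inspect" probe and the
// transfer/load step visible in the log.
func runtimeHasImage(image string) bool         { return false }
func loadFromCache(image, tarball string) error { return nil }

// cachePathFor mirrors the on-disk layout from the log, e.g.
// .../cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
func cachePathFor(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

func loadCachedImages(cacheDir string, images []string) error {
	for _, img := range images {
		if runtimeHasImage(img) {
			continue // already present in the container runtime
		}
		tarball := cachePathFor(cacheDir, img)
		if _, err := os.Stat(tarball); err != nil {
			// The failure mode in the log: the cached tarball is absent.
			return fmt.Errorf("LoadCachedImages: stat %s: %w", tarball, err)
		}
		if err := loadFromCache(img, tarball); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	err := loadCachedImages("/home/jenkins/.minikube/cache/images/amd64",
		[]string{"registry.k8s.io/kube-proxy:v1.20.0"})
	fmt.Println(err)
}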
	I0910 18:59:58.553455   72122 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.20.0 crio true true} ...
	I0910 18:59:58.553634   72122 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-432422 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:59:58.553722   72122 ssh_runner.go:195] Run: crio config
	I0910 18:59:58.605518   72122 cni.go:84] Creating CNI manager for ""
	I0910 18:59:58.605542   72122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:58.605554   72122 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:58.605577   72122 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-432422 NodeName:old-k8s-version-432422 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0910 18:59:58.605744   72122 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-432422"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:59:58.605814   72122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0910 18:59:58.618033   72122 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:58.618096   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:58.629175   72122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0910 18:59:58.653830   72122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:58.679797   72122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
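The kubeadm/kubelet/kube-proxy config printed above is generated from the kubeadm options struct (kubeadm.go:181) and copied to the VM as /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration of how such a config can be rendered from a handful of parameters, here is a small text/template sketch; kubeadmParams and the template are simplified assumptions, not minikube's actual bootstrapper code.

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is a hypothetical subset of the options seen in the log.
type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.61.51",
		BindPort:          8443,
		NodeName:          "old-k8s-version-432422",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.20.0",
	}
	if err := template.Must(template.New("kubeadm").Parse(initTmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}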
	I0910 18:59:58.698692   72122 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:58.702565   72122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:58.715128   72122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:58.858262   72122 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:58.876681   72122 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422 for IP: 192.168.61.51
	I0910 18:59:58.876719   72122 certs.go:194] generating shared ca certs ...
	I0910 18:59:58.876740   72122 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:58.876921   72122 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:58.876983   72122 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:58.876996   72122 certs.go:256] generating profile certs ...
	I0910 18:59:58.877129   72122 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/client.key
	I0910 18:59:58.877210   72122 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key.da6b542b
	I0910 18:59:58.877264   72122 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.key
	I0910 18:59:58.877424   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:58.877473   72122 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:58.877491   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:58.877528   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:58.877560   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:58.877591   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:58.877648   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:58.878410   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:58.936013   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:58.969736   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:59.017414   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:59.063599   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0910 18:59:59.093934   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:59:59.138026   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:59.166507   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 18:59:59.196972   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:59.223596   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:59.250627   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:59.279886   72122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:59.300491   72122 ssh_runner.go:195] Run: openssl version
	I0910 18:59:59.306521   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:59.317238   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.321625   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.321682   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.327532   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:59.339028   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:59.350578   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.355025   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.355106   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.360701   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:59.375040   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:59.389867   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.395829   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.395890   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.402425   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
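The repeated ls / openssl x509 -hash / ln -fs sequence above installs each CA certificate under /etc/ssl/certs by its OpenSSL subject hash (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). A small sketch of that step, run locally via os/exec rather than over SSH as minikube does; the paths are illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert reproduces the openssl/ln sequence from the log: compute the
// certificate's subject hash and symlink it as <certsDir>/<hash>.0.
func installCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mirror "ln -fs": replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}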
	I0910 18:59:59.414077   72122 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:59.418909   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:59.425061   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:59.431213   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:59.437581   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:59.443603   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:59.449820   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
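Each control-plane certificate above is validated with "openssl x509 -noout -checkend 86400", which exits non-zero if the certificate will expire within the next 24 hours. A minimal sketch of that probe as a Go wrapper; the certificate paths are the ones listed in the log.

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinADay wraps the "-checkend 86400" check from the log: openssl
// exits 0 if the certificate is still valid 86400 seconds from now.
func expiresWithinADay(certPath string) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return true, nil // non-zero exit: cert expires within the window
		}
		return false, err
	}
	return false, nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithinADay(c)
		fmt.Println(c, "expires within 24h:", soon, err)
	}
}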
	I0910 18:59:59.456100   72122 kubeadm.go:392] StartCluster: {Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:59.456189   72122 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:59.456234   72122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:59.497167   72122 cri.go:89] found id: ""
	I0910 18:59:59.497227   72122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:59.508449   72122 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:59.508474   72122 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:59.508527   72122 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:59.521416   72122 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:59.522489   72122 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-432422" does not appear in /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:59:59.523125   72122 kubeconfig.go:62] /home/jenkins/minikube-integration/19598-5973/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-432422" cluster setting kubeconfig missing "old-k8s-version-432422" context setting]
	I0910 18:59:59.524107   72122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:59.637793   72122 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:59.651879   72122 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.51
	I0910 18:59:59.651916   72122 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:59.651930   72122 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:59.651989   72122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:59.691857   72122 cri.go:89] found id: ""
	I0910 18:59:59.691922   72122 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:59.708610   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:59.718680   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:59.718702   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:59.718755   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:59:59.729965   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:59.730028   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:59.740037   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:59:59.750640   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:59.750706   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:59.762436   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:59:59.773456   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:59.773522   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:59.783438   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:59:59.792996   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:59.793056   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:59:59.805000   72122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:59:59.815384   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:59.955068   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:00.842403   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.102530   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.212897   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.340128   72122 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:00:01.340217   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:01.841004   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:02.340913   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:03.225898   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:05.723882   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:03.034295   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:05.034431   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:03.790275   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:03.790710   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 19:00:03.790736   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 19:00:03.790662   73202 retry.go:31] will retry after 3.202952028s: waiting for machine to come up
	I0910 19:00:06.995302   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:06.996124   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 19:00:06.996149   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 19:00:06.996073   73202 retry.go:31] will retry after 3.076425277s: waiting for machine to come up
	I0910 19:00:02.840935   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:03.340938   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:03.840669   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:04.341213   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:04.841274   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:05.340698   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:05.841152   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:06.340425   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:06.841001   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:07.341198   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
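After the kubeadm init phases, the bootstrapper waits for the kube-apiserver process to appear by re-running "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every 500ms, which is the cadence visible in the timestamps above. A minimal sketch of such a poll loop, not the actual api_server.go code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the probe from the log.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// waitForAPIServerProcess polls every 500ms until the process shows up or the
// timeout elapses.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
}

func main() {
	fmt.Println(waitForAPIServerProcess(4 * time.Minute))
}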
	I0910 19:00:07.724121   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:10.223744   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:07.533428   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:09.534830   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:12.033655   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
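The interleaved pod_ready.go:103 lines come from other test profiles polling the metrics-server pod and logging while its Ready condition is still False. Conceptually that check reads the pod's PodReady condition; a small client-go sketch is below (it assumes client-go is available in go.mod, and the kubeconfig path and pod name are taken from this run for illustration only).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True, the value
// pod_ready.go keeps logging as "Ready":"False" above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19598-5973/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-w8rqv", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podReady(pod))
}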
	I0910 19:00:10.075125   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.075606   71183 main.go:141] libmachine: (embed-certs-836868) Found IP for machine: 192.168.39.107
	I0910 19:00:10.075634   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has current primary IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.075643   71183 main.go:141] libmachine: (embed-certs-836868) Reserving static IP address...
	I0910 19:00:10.076046   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "embed-certs-836868", mac: "52:54:00:24:ef:f2", ip: "192.168.39.107"} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.076075   71183 main.go:141] libmachine: (embed-certs-836868) DBG | skip adding static IP to network mk-embed-certs-836868 - found existing host DHCP lease matching {name: "embed-certs-836868", mac: "52:54:00:24:ef:f2", ip: "192.168.39.107"}
	I0910 19:00:10.076103   71183 main.go:141] libmachine: (embed-certs-836868) Reserved static IP address: 192.168.39.107
	I0910 19:00:10.076122   71183 main.go:141] libmachine: (embed-certs-836868) Waiting for SSH to be available...
	I0910 19:00:10.076133   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Getting to WaitForSSH function...
	I0910 19:00:10.078039   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.078327   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.078352   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.078452   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Using SSH client type: external
	I0910 19:00:10.078475   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa (-rw-------)
	I0910 19:00:10.078514   71183 main.go:141] libmachine: (embed-certs-836868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 19:00:10.078527   71183 main.go:141] libmachine: (embed-certs-836868) DBG | About to run SSH command:
	I0910 19:00:10.078548   71183 main.go:141] libmachine: (embed-certs-836868) DBG | exit 0
	I0910 19:00:10.201403   71183 main.go:141] libmachine: (embed-certs-836868) DBG | SSH cmd err, output: <nil>: 
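The DBG lines above show libmachine waiting for the embed-certs VM: retry.go backs off while the machine has no IP ("will retry after 3.2s"), and once DHCP assigns 192.168.39.107 the driver probes reachability by running "exit 0" over an external ssh client with host-key checking disabled. A rough sketch of that wait loop, using the key path and options from the log; the backoff policy here is an assumption, not libmachine's exact one.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReachable runs "exit 0" over ssh, mirroring the external-client probe.
func sshReachable(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+ip, "exit 0")
	return cmd.Run() == nil
}

// waitForSSH retries with a growing delay, roughly like the retry.go waits
// seen while the machine boots.
func waitForSSH(ip, keyPath string, attempts int) error {
	delay := time.Second
	for i := 0; i < attempts; i++ {
		if sshReachable(ip, keyPath) {
			return nil
		}
		time.Sleep(delay)
		delay += delay / 2
	}
	return fmt.Errorf("ssh to %s never became available", ip)
}

func main() {
	fmt.Println(waitForSSH("192.168.39.107",
		"/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa", 10))
}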
	I0910 19:00:10.201748   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetConfigRaw
	I0910 19:00:10.202405   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:10.204760   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.205130   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.205160   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.205408   71183 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/config.json ...
	I0910 19:00:10.205697   71183 machine.go:93] provisionDockerMachine start ...
	I0910 19:00:10.205714   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:10.205924   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.208095   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.208394   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.208418   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.208534   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.208712   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.208856   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.208958   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.209193   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.209412   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.209427   71183 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 19:00:10.313247   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 19:00:10.313278   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 19:00:10.313556   71183 buildroot.go:166] provisioning hostname "embed-certs-836868"
	I0910 19:00:10.313584   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 19:00:10.313765   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.316135   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.316569   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.316592   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.316739   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.316893   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.317046   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.317165   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.317288   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.317490   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.317506   71183 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-836868 && echo "embed-certs-836868" | sudo tee /etc/hostname
	I0910 19:00:10.433585   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-836868
	
	I0910 19:00:10.433608   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.436076   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.436407   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.436440   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.436627   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.436826   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.436972   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.437146   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.437314   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.437480   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.437495   71183 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-836868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-836868/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-836868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 19:00:10.546105   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 19:00:10.546146   71183 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 19:00:10.546186   71183 buildroot.go:174] setting up certificates
	I0910 19:00:10.546197   71183 provision.go:84] configureAuth start
	I0910 19:00:10.546214   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 19:00:10.546485   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:10.549236   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.549567   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.549594   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.549696   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.551807   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.552162   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.552195   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.552326   71183 provision.go:143] copyHostCerts
	I0910 19:00:10.552370   71183 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 19:00:10.552380   71183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 19:00:10.552435   71183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 19:00:10.552559   71183 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 19:00:10.552568   71183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 19:00:10.552588   71183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 19:00:10.552646   71183 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 19:00:10.552653   71183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 19:00:10.552669   71183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 19:00:10.552714   71183 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.embed-certs-836868 san=[127.0.0.1 192.168.39.107 embed-certs-836868 localhost minikube]
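provision.go:117 generates a server certificate whose SANs cover 127.0.0.1, the VM IP, the machine name, localhost and minikube, signed by the ca.pem/ca-key.pem pair. As a very rough illustration of building a certificate with those SANs in Go's crypto/x509, here is a self-signed sketch (self-signed only for brevity; the real flow signs with the CA key, and all values are taken from the log for illustration):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-836868"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go line above.
		DNSNames:    []string{"embed-certs-836868", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.107")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}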
	I0910 19:00:10.610073   71183 provision.go:177] copyRemoteCerts
	I0910 19:00:10.610132   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 19:00:10.610153   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.612881   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.613264   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.613301   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.613485   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.613695   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.613863   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.613980   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:10.695479   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 19:00:10.719380   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0910 19:00:10.744099   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 19:00:10.767849   71183 provision.go:87] duration metric: took 221.638443ms to configureAuth
	I0910 19:00:10.767873   71183 buildroot.go:189] setting minikube options for container-runtime
	I0910 19:00:10.768065   71183 config.go:182] Loaded profile config "embed-certs-836868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:00:10.768150   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.770831   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.771149   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.771178   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.771338   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.771539   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.771702   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.771825   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.771952   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.772106   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.772120   71183 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 19:00:10.992528   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 19:00:10.992568   71183 machine.go:96] duration metric: took 786.857321ms to provisionDockerMachine
	I0910 19:00:10.992583   71183 start.go:293] postStartSetup for "embed-certs-836868" (driver="kvm2")
	I0910 19:00:10.992598   71183 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 19:00:10.992630   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:10.992999   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 19:00:10.993030   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.995361   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.995745   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.995777   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.995925   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.996100   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.996212   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.996375   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:11.079205   71183 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 19:00:11.083998   71183 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 19:00:11.084028   71183 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 19:00:11.084089   71183 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 19:00:11.084158   71183 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 19:00:11.084241   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 19:00:11.093150   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 19:00:11.116894   71183 start.go:296] duration metric: took 124.294668ms for postStartSetup
	I0910 19:00:11.116938   71183 fix.go:56] duration metric: took 19.934731446s for fixHost
	I0910 19:00:11.116962   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:11.119482   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.119784   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.119821   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.119980   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:11.120176   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.120331   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.120501   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:11.120645   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:11.120868   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:11.120883   71183 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 19:00:11.217542   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994811.172877822
	
	I0910 19:00:11.217570   71183 fix.go:216] guest clock: 1725994811.172877822
	I0910 19:00:11.217577   71183 fix.go:229] Guest: 2024-09-10 19:00:11.172877822 +0000 UTC Remote: 2024-09-10 19:00:11.116943488 +0000 UTC m=+358.948412200 (delta=55.934334ms)
	I0910 19:00:11.217603   71183 fix.go:200] guest clock delta is within tolerance: 55.934334ms
	I0910 19:00:11.217607   71183 start.go:83] releasing machines lock for "embed-certs-836868", held for 20.035440196s
	I0910 19:00:11.217627   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.217861   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:11.220855   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.221282   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.221313   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.221533   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.222074   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.222277   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.222354   71183 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 19:00:11.222402   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:11.222528   71183 ssh_runner.go:195] Run: cat /version.json
	I0910 19:00:11.222570   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:11.225205   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.225542   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.225565   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.225581   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.225753   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:11.225934   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.226035   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.226062   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.226109   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:11.226207   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:11.226283   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:11.226370   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.226535   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:11.226668   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:11.297642   71183 ssh_runner.go:195] Run: systemctl --version
	I0910 19:00:11.322486   71183 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 19:00:11.470402   71183 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 19:00:11.477843   71183 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 19:00:11.477903   71183 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 19:00:11.495518   71183 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 19:00:11.495542   71183 start.go:495] detecting cgroup driver to use...
	I0910 19:00:11.495597   71183 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 19:00:11.512467   71183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:00:11.526665   71183 docker.go:217] disabling cri-docker service (if available) ...
	I0910 19:00:11.526732   71183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 19:00:11.540445   71183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 19:00:11.554386   71183 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 19:00:11.682012   71183 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 19:00:11.846239   71183 docker.go:233] disabling docker service ...
	I0910 19:00:11.846303   71183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 19:00:11.860981   71183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 19:00:11.874271   71183 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 19:00:12.005716   71183 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 19:00:12.137151   71183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 19:00:12.151156   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:00:12.170086   71183 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 19:00:12.170150   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.180741   71183 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 19:00:12.180804   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.190933   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.200885   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:07.840772   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:08.341153   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:08.840737   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:09.340471   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:09.840262   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:10.340827   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:10.840645   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:11.340524   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:11.840521   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:12.340560   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:12.210950   71183 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 19:00:12.221730   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.232931   71183 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.251318   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.261473   71183 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 19:00:12.270818   71183 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 19:00:12.270873   71183 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 19:00:12.284581   71183 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 19:00:12.294214   71183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:00:12.424646   71183 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 19:00:12.517553   71183 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 19:00:12.517633   71183 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 19:00:12.522728   71183 start.go:563] Will wait 60s for crictl version
	I0910 19:00:12.522775   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:00:12.526754   71183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 19:00:12.569377   71183 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 19:00:12.569454   71183 ssh_runner.go:195] Run: crio --version
	I0910 19:00:12.597783   71183 ssh_runner.go:195] Run: crio --version
	I0910 19:00:12.632619   71183 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 19:00:12.725298   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:15.223906   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:14.035868   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:16.534058   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:12.633800   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:12.637104   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:12.637447   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:12.637476   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:12.637684   71183 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 19:00:12.641996   71183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:00:12.654577   71183 kubeadm.go:883] updating cluster {Name:embed-certs-836868 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-836868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 19:00:12.654684   71183 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 19:00:12.654737   71183 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 19:00:12.694585   71183 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 19:00:12.694644   71183 ssh_runner.go:195] Run: which lz4
	I0910 19:00:12.699764   71183 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 19:00:12.705406   71183 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 19:00:12.705437   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 19:00:14.054131   71183 crio.go:462] duration metric: took 1.354391682s to copy over tarball
	I0910 19:00:14.054206   71183 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 19:00:16.114941   71183 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.06070257s)
	I0910 19:00:16.114968   71183 crio.go:469] duration metric: took 2.060808083s to extract the tarball
	I0910 19:00:16.114978   71183 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 19:00:16.153934   71183 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 19:00:16.199988   71183 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 19:00:16.200008   71183 cache_images.go:84] Images are preloaded, skipping loading
	I0910 19:00:16.200015   71183 kubeadm.go:934] updating node { 192.168.39.107 8443 v1.31.0 crio true true} ...
	I0910 19:00:16.200109   71183 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-836868 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-836868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 19:00:16.200168   71183 ssh_runner.go:195] Run: crio config
	I0910 19:00:16.249409   71183 cni.go:84] Creating CNI manager for ""
	I0910 19:00:16.249430   71183 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:00:16.249443   71183 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 19:00:16.249462   71183 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-836868 NodeName:embed-certs-836868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 19:00:16.249596   71183 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-836868"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 19:00:16.249652   71183 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 19:00:16.265984   71183 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 19:00:16.266062   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 19:00:16.276007   71183 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0910 19:00:16.291971   71183 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 19:00:16.307712   71183 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0910 19:00:16.323789   71183 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0910 19:00:16.327478   71183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:00:16.339545   71183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:00:16.470249   71183 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:00:16.487798   71183 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868 for IP: 192.168.39.107
	I0910 19:00:16.487838   71183 certs.go:194] generating shared ca certs ...
	I0910 19:00:16.487858   71183 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:00:16.488058   71183 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 19:00:16.488110   71183 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 19:00:16.488124   71183 certs.go:256] generating profile certs ...
	I0910 19:00:16.488243   71183 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/client.key
	I0910 19:00:16.488307   71183 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/apiserver.key.04acd22a
	I0910 19:00:16.488355   71183 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/proxy-client.key
	I0910 19:00:16.488507   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 19:00:16.488547   71183 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 19:00:16.488560   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 19:00:16.488593   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 19:00:16.488633   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 19:00:16.488669   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 19:00:16.488856   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 19:00:16.489528   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 19:00:16.529980   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 19:00:16.568653   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 19:00:16.593924   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 19:00:16.628058   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0910 19:00:16.669209   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 19:00:16.693274   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 19:00:16.716323   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 19:00:16.740155   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 19:00:16.763908   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 19:00:16.787980   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 19:00:16.811754   71183 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 19:00:16.828151   71183 ssh_runner.go:195] Run: openssl version
	I0910 19:00:16.834095   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 19:00:16.845376   71183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:00:16.850178   71183 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:00:16.850230   71183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:00:16.856507   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 19:00:16.868105   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 19:00:16.879950   71183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 19:00:16.884778   71183 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 19:00:16.884823   71183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 19:00:16.890715   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 19:00:16.903523   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 19:00:16.914585   71183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 19:00:16.919105   71183 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 19:00:16.919151   71183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 19:00:16.924965   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 19:00:16.935579   71183 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 19:00:16.939895   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 19:00:16.945595   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 19:00:16.951247   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 19:00:16.956938   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 19:00:16.962908   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 19:00:16.968664   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 19:00:16.974624   71183 kubeadm.go:392] StartCluster: {Name:embed-certs-836868 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-836868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:00:16.974725   71183 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 19:00:16.974778   71183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 19:00:17.012869   71183 cri.go:89] found id: ""
	I0910 19:00:17.012947   71183 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 19:00:17.023781   71183 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 19:00:17.023798   71183 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 19:00:17.023846   71183 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 19:00:17.034549   71183 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 19:00:17.035566   71183 kubeconfig.go:125] found "embed-certs-836868" server: "https://192.168.39.107:8443"
	I0910 19:00:17.037751   71183 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 19:00:17.047667   71183 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.107
	I0910 19:00:17.047696   71183 kubeadm.go:1160] stopping kube-system containers ...
	I0910 19:00:17.047708   71183 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 19:00:17.047747   71183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 19:00:17.083130   71183 cri.go:89] found id: ""
	I0910 19:00:17.083200   71183 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 19:00:17.101035   71183 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:00:17.111335   71183 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:00:17.111357   71183 kubeadm.go:157] found existing configuration files:
	
	I0910 19:00:17.111414   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:00:17.120543   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:00:17.120593   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:00:17.130938   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:00:17.140688   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:00:17.140747   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:00:17.150637   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:00:17.160483   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:00:17.160520   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:00:17.170417   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:00:17.179778   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:00:17.179827   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:00:17.189197   71183 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:00:17.199264   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:12.841060   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:13.340347   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:13.841136   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:14.340603   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:14.840913   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:15.341205   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:15.840692   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:16.340839   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:16.841050   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:17.341340   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:17.224985   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:19.231248   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:18.534658   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:20.534807   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:17.309791   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.257162   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.482216   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.555094   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.645089   71183 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:00:18.645178   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.146266   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.645546   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.146275   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.645291   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.662158   71183 api_server.go:72] duration metric: took 2.017082575s to wait for apiserver process to appear ...
	I0910 19:00:20.662183   71183 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:00:20.662204   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:17.840510   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:18.340821   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:18.841156   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.340316   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.840339   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.341140   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.841333   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:21.340342   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:21.840282   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:22.340361   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.326005   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 19:00:23.326036   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 19:00:23.326048   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:23.346004   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 19:00:23.346035   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 19:00:23.662353   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:23.669314   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:00:23.669344   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:00:24.162975   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:24.170262   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:00:24.170298   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:00:24.662865   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:24.667320   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I0910 19:00:24.674393   71183 api_server.go:141] control plane version: v1.31.0
	I0910 19:00:24.674418   71183 api_server.go:131] duration metric: took 4.01222766s to wait for apiserver health ...
	I0910 19:00:24.674427   71183 cni.go:84] Creating CNI manager for ""
	I0910 19:00:24.674433   71183 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:00:24.676229   71183 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 19:00:24.677519   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 19:00:24.692951   71183 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 19:00:24.718355   71183 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:00:24.732731   71183 system_pods.go:59] 8 kube-system pods found
	I0910 19:00:24.732758   71183 system_pods.go:61] "coredns-6f6b679f8f-mt78p" [b4bbfe99-3c36-4095-b7e8-ee0861f9973f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 19:00:24.732764   71183 system_pods.go:61] "etcd-embed-certs-836868" [0515a8db-e1b9-41d9-a69e-e49fbd6af70f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 19:00:24.732775   71183 system_pods.go:61] "kube-apiserver-embed-certs-836868" [f6771940-518b-4ae6-93a6-8ffd2e08a6ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 19:00:24.732781   71183 system_pods.go:61] "kube-controller-manager-embed-certs-836868" [07f6b11e-478b-4501-b615-8906877c7cbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 19:00:24.732798   71183 system_pods.go:61] "kube-proxy-4fddv" [13f0b1df-26eb-4a6c-957d-0b7655309cb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0910 19:00:24.732808   71183 system_pods.go:61] "kube-scheduler-embed-certs-836868" [a16bf2b1-3fe3-487b-92f7-f71f693b83dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 19:00:24.732817   71183 system_pods.go:61] "metrics-server-6867b74b74-26knw" [fdf89bfa-f2b6-4dc4-9279-ed75c1256494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:00:24.732823   71183 system_pods.go:61] "storage-provisioner" [47ed78c5-1cce-4d50-a023-5c356f331035] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 19:00:24.732835   71183 system_pods.go:74] duration metric: took 14.459216ms to wait for pod list to return data ...
	I0910 19:00:24.732846   71183 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:00:24.742472   71183 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:00:24.742497   71183 node_conditions.go:123] node cpu capacity is 2
	I0910 19:00:24.742507   71183 node_conditions.go:105] duration metric: took 9.657853ms to run NodePressure ...
	I0910 19:00:24.742523   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:25.021719   71183 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 19:00:25.026163   71183 kubeadm.go:739] kubelet initialised
	I0910 19:00:25.026187   71183 kubeadm.go:740] duration metric: took 4.442058ms waiting for restarted kubelet to initialise ...
	I0910 19:00:25.026196   71183 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:00:25.030895   71183 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.035021   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.035044   71183 pod_ready.go:82] duration metric: took 4.12756ms for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.035055   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.035064   71183 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.039362   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "etcd-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.039381   71183 pod_ready.go:82] duration metric: took 4.309293ms for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.039389   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "etcd-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.039394   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.049142   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.049164   71183 pod_ready.go:82] duration metric: took 9.762471ms for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.049175   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.049182   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.122255   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.122285   71183 pod_ready.go:82] duration metric: took 73.09407ms for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.122295   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.122301   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.522122   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-proxy-4fddv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.522160   71183 pod_ready.go:82] duration metric: took 399.850787ms for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.522175   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-proxy-4fddv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.522185   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.921918   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.921947   71183 pod_ready.go:82] duration metric: took 399.75274ms for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.921956   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.921962   71183 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:26.322195   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:26.322219   71183 pod_ready.go:82] duration metric: took 400.248825ms for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:26.322228   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:26.322235   71183 pod_ready.go:39] duration metric: took 1.296028669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:00:26.322251   71183 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 19:00:26.333796   71183 ops.go:34] apiserver oom_adj: -16
	I0910 19:00:26.333824   71183 kubeadm.go:597] duration metric: took 9.310018521s to restartPrimaryControlPlane
	I0910 19:00:26.333834   71183 kubeadm.go:394] duration metric: took 9.359219145s to StartCluster
	I0910 19:00:26.333850   71183 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:00:26.333920   71183 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 19:00:26.336496   71183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:00:26.336792   71183 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 19:00:26.336863   71183 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 19:00:26.336935   71183 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-836868"
	I0910 19:00:26.336969   71183 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-836868"
	W0910 19:00:26.336980   71183 addons.go:243] addon storage-provisioner should already be in state true
	I0910 19:00:26.336995   71183 addons.go:69] Setting default-storageclass=true in profile "embed-certs-836868"
	I0910 19:00:26.337050   71183 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-836868"
	I0910 19:00:26.337058   71183 config.go:182] Loaded profile config "embed-certs-836868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:00:26.337050   71183 addons.go:69] Setting metrics-server=true in profile "embed-certs-836868"
	I0910 19:00:26.337011   71183 host.go:66] Checking if "embed-certs-836868" exists ...
	I0910 19:00:26.337146   71183 addons.go:234] Setting addon metrics-server=true in "embed-certs-836868"
	W0910 19:00:26.337165   71183 addons.go:243] addon metrics-server should already be in state true
	I0910 19:00:26.337234   71183 host.go:66] Checking if "embed-certs-836868" exists ...
	I0910 19:00:26.337501   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.337547   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.337552   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.337583   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.337638   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.337677   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.339741   71183 out.go:177] * Verifying Kubernetes components...
	I0910 19:00:26.341792   71183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:00:26.354154   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37431
	I0910 19:00:26.354750   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.355345   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.355379   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.355756   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.356316   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
	I0910 19:00:26.356389   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.356428   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.356508   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35641
	I0910 19:00:26.356810   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.356893   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.357384   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.357411   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.361164   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.361278   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.361302   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.361363   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.361709   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.362446   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.362483   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.364762   71183 addons.go:234] Setting addon default-storageclass=true in "embed-certs-836868"
	W0910 19:00:26.364786   71183 addons.go:243] addon default-storageclass should already be in state true
	I0910 19:00:26.364814   71183 host.go:66] Checking if "embed-certs-836868" exists ...
	I0910 19:00:26.365165   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.365230   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.379158   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35583
	I0910 19:00:26.379696   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.380235   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.380266   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.380654   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.380865   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.382030   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0910 19:00:26.382358   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.382892   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.382912   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.382928   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:26.383271   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.383441   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.385129   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:26.385171   71183 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 19:00:26.385687   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40109
	I0910 19:00:26.386001   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.386217   71183 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:00:21.723833   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:23.724422   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:25.724456   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:23.034262   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:25.035125   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:26.386227   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 19:00:26.386289   71183 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 19:00:26.386309   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:26.386518   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.386533   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.386931   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.387566   71183 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:00:26.387651   71183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 19:00:26.387672   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:26.387618   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.387760   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.389782   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.389941   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:26.390190   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.390263   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:26.390558   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:26.390744   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:26.390921   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:26.391058   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.391542   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:26.391585   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.391788   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:26.391941   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:26.392097   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:26.392256   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:26.404601   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42887
	I0910 19:00:26.405167   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.406097   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.406655   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.407006   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.407163   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.409223   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:26.409437   71183 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 19:00:26.409454   71183 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 19:00:26.409470   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:26.412388   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.412812   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:26.412831   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.413010   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:26.413177   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:26.413333   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:26.413474   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:26.533906   71183 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:00:26.552203   71183 node_ready.go:35] waiting up to 6m0s for node "embed-certs-836868" to be "Ready" ...
	I0910 19:00:26.687774   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 19:00:26.687804   71183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 19:00:26.690124   71183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:00:26.737647   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 19:00:26.737673   71183 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 19:00:26.739650   71183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 19:00:26.783096   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:00:26.783125   71183 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 19:00:26.828766   71183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:00:22.841048   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.341180   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.841325   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:24.340485   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:24.841340   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:25.340935   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:25.840886   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:26.340826   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:26.840344   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:27.341189   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:27.844896   71183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.154733205s)
	I0910 19:00:27.844931   71183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.105250764s)
	I0910 19:00:27.844944   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.844969   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.844979   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.844980   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.845406   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.845420   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.845434   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.845446   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.845464   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.845471   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.845702   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.845733   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.845747   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.847084   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.847101   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.847110   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.847118   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.847308   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.847323   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.852938   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.852956   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.853198   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.853219   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.853224   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.879527   71183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.05071539s)
	I0910 19:00:27.879577   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.879597   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.880030   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.880050   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.880059   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.880081   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.880381   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.880405   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.880416   71183 addons.go:475] Verifying addon metrics-server=true in "embed-certs-836868"
	I0910 19:00:27.880383   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.883034   71183 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0910 19:00:28.222881   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:30.223636   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:27.535036   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:30.034633   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:27.884243   71183 addons.go:510] duration metric: took 1.547392632s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0910 19:00:28.556786   71183 node_ready.go:53] node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:31.055519   71183 node_ready.go:53] node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:27.840306   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:28.340657   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:28.841179   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:29.340881   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:29.840957   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:30.341260   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:30.841151   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:31.340930   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:31.840360   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:32.341199   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:32.724435   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:35.223194   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:32.533611   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:34.534941   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:37.034007   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:33.056381   71183 node_ready.go:53] node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:34.056156   71183 node_ready.go:49] node "embed-certs-836868" has status "Ready":"True"
	I0910 19:00:34.056191   71183 node_ready.go:38] duration metric: took 7.503955102s for node "embed-certs-836868" to be "Ready" ...
	I0910 19:00:34.056200   71183 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:00:34.063331   71183 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:34.068294   71183 pod_ready.go:93] pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:34.068322   71183 pod_ready.go:82] duration metric: took 4.96275ms for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:34.068335   71183 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:36.077798   71183 pod_ready.go:103] pod "etcd-embed-certs-836868" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:32.841192   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:33.340518   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:33.840995   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:34.341016   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:34.840480   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:35.340647   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:35.840416   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:36.340921   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:36.840557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:37.340956   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:37.224065   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:39.723852   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:39.533725   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:41.534430   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:37.576189   71183 pod_ready.go:93] pod "etcd-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.576218   71183 pod_ready.go:82] duration metric: took 3.507872898s for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.576238   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.582150   71183 pod_ready.go:93] pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.582167   71183 pod_ready.go:82] duration metric: took 5.921544ms for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.582175   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.586941   71183 pod_ready.go:93] pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.586956   71183 pod_ready.go:82] duration metric: took 4.774648ms for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.586963   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.591829   71183 pod_ready.go:93] pod "kube-proxy-4fddv" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.591846   71183 pod_ready.go:82] duration metric: took 4.876938ms for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.591854   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.657930   71183 pod_ready.go:93] pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.657952   71183 pod_ready.go:82] duration metric: took 66.092785ms for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.657962   71183 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:39.665465   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:41.666629   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:37.841210   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:38.341302   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:38.840926   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:39.340558   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:39.840395   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:40.341022   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:40.841093   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:41.341228   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:41.841103   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:42.340329   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:42.223446   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:44.223533   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:46.224840   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:44.033565   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:46.034402   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:44.164336   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:46.164983   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:42.841000   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:43.341147   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:43.840534   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:44.340988   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:44.840926   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:45.340859   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:45.840877   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:46.340930   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:46.841175   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:47.341064   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.722930   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:50.723539   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:48.036816   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:50.534367   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:48.667433   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:51.164114   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:47.841037   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.341204   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.840961   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:49.340679   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:49.841173   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:50.340751   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:50.841158   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:51.340999   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:51.840349   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:52.340383   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:52.723945   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:55.224168   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:53.034234   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:55.533690   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:53.164294   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:55.666369   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:52.840991   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:53.340439   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:53.840487   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:54.340407   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:54.840557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:55.340603   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:55.840619   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:56.340844   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:56.841190   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:57.340927   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:57.724247   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:00.223715   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:58.033639   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:00.034297   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:57.670234   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:00.164278   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:02.164755   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:57.840798   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:58.340905   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:58.841330   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:59.340743   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:59.840256   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:00.340970   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:00.840732   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:01.340927   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:01.341014   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:01.378922   72122 cri.go:89] found id: ""
	I0910 19:01:01.378953   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.378964   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:01.378971   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:01.379032   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:01.413274   72122 cri.go:89] found id: ""
	I0910 19:01:01.413302   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.413313   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:01.413320   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:01.413383   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:01.449165   72122 cri.go:89] found id: ""
	I0910 19:01:01.449204   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.449215   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:01.449221   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:01.449291   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:01.484627   72122 cri.go:89] found id: ""
	I0910 19:01:01.484650   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.484657   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:01.484663   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:01.484720   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:01.519332   72122 cri.go:89] found id: ""
	I0910 19:01:01.519357   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.519364   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:01.519370   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:01.519424   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:01.554080   72122 cri.go:89] found id: ""
	I0910 19:01:01.554102   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.554109   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:01.554114   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:01.554160   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:01.590100   72122 cri.go:89] found id: ""
	I0910 19:01:01.590131   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.590143   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:01.590149   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:01.590208   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:01.623007   72122 cri.go:89] found id: ""
	I0910 19:01:01.623034   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.623045   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:01.623055   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:01.623070   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:01.679940   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:01.679971   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:01.694183   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:01.694218   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:01.826997   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:01.827025   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:01.827038   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:01.903885   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:01.903926   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:02.224039   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:04.224422   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:02.533395   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:05.034075   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:04.665680   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:06.665874   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:04.450792   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:04.471427   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:04.471501   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:04.521450   72122 cri.go:89] found id: ""
	I0910 19:01:04.521484   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.521494   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:04.521503   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:04.521562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:04.577588   72122 cri.go:89] found id: ""
	I0910 19:01:04.577622   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.577633   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:04.577641   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:04.577707   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:04.615558   72122 cri.go:89] found id: ""
	I0910 19:01:04.615586   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.615594   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:04.615599   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:04.615652   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:04.655763   72122 cri.go:89] found id: ""
	I0910 19:01:04.655793   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.655806   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:04.655815   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:04.655881   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:04.692620   72122 cri.go:89] found id: ""
	I0910 19:01:04.692642   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.692649   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:04.692654   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:04.692709   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:04.730575   72122 cri.go:89] found id: ""
	I0910 19:01:04.730601   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.730611   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:04.730616   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:04.730665   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:04.766716   72122 cri.go:89] found id: ""
	I0910 19:01:04.766742   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.766749   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:04.766754   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:04.766799   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:04.808122   72122 cri.go:89] found id: ""
	I0910 19:01:04.808151   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.808162   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:04.808173   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:04.808185   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:04.858563   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:04.858592   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:04.872323   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:04.872350   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:04.942541   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:04.942571   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:04.942588   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:05.022303   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:05.022338   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:06.723760   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:08.724550   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:11.223094   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:07.533060   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:09.534466   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:12.034244   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:09.163526   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:11.164502   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:07.562092   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:07.575254   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:07.575308   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:07.616583   72122 cri.go:89] found id: ""
	I0910 19:01:07.616607   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.616615   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:07.616620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:07.616676   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:07.654676   72122 cri.go:89] found id: ""
	I0910 19:01:07.654700   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.654711   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:07.654718   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:07.654790   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:07.690054   72122 cri.go:89] found id: ""
	I0910 19:01:07.690085   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.690096   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:07.690104   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:07.690171   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:07.724273   72122 cri.go:89] found id: ""
	I0910 19:01:07.724295   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.724302   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:07.724307   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:07.724363   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:07.757621   72122 cri.go:89] found id: ""
	I0910 19:01:07.757646   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.757654   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:07.757660   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:07.757716   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:07.791502   72122 cri.go:89] found id: ""
	I0910 19:01:07.791533   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.791543   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:07.791557   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:07.791620   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:07.825542   72122 cri.go:89] found id: ""
	I0910 19:01:07.825577   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.825586   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:07.825592   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:07.825649   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:07.862278   72122 cri.go:89] found id: ""
	I0910 19:01:07.862303   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.862312   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:07.862320   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:07.862331   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:07.952016   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:07.952059   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:07.997004   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:07.997034   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:08.047745   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:08.047783   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:08.064712   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:08.064736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:08.136822   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:10.637017   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:10.650113   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:10.650198   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:10.687477   72122 cri.go:89] found id: ""
	I0910 19:01:10.687504   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.687513   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:10.687520   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:10.687594   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:10.721410   72122 cri.go:89] found id: ""
	I0910 19:01:10.721437   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.721447   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:10.721455   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:10.721514   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:10.757303   72122 cri.go:89] found id: ""
	I0910 19:01:10.757330   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.757338   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:10.757343   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:10.757396   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:10.794761   72122 cri.go:89] found id: ""
	I0910 19:01:10.794788   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.794799   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:10.794806   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:10.794885   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:10.828631   72122 cri.go:89] found id: ""
	I0910 19:01:10.828657   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.828668   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:10.828675   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:10.828737   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:10.863609   72122 cri.go:89] found id: ""
	I0910 19:01:10.863634   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.863641   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:10.863646   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:10.863734   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:10.899299   72122 cri.go:89] found id: ""
	I0910 19:01:10.899324   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.899335   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:10.899342   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:10.899403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:10.939233   72122 cri.go:89] found id: ""
	I0910 19:01:10.939259   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.939268   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:10.939277   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:10.939290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:10.976599   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:10.976627   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:11.029099   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:11.029144   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:11.045401   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:11.045426   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:11.119658   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:11.119679   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:11.119696   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:13.224189   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:15.723673   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:14.034325   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:16.534463   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:13.663847   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:15.664387   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:13.698696   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:13.712317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:13.712386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:13.747442   72122 cri.go:89] found id: ""
	I0910 19:01:13.747470   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.747480   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:13.747487   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:13.747555   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:13.782984   72122 cri.go:89] found id: ""
	I0910 19:01:13.783008   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.783015   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:13.783021   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:13.783078   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:13.820221   72122 cri.go:89] found id: ""
	I0910 19:01:13.820245   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.820256   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:13.820262   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:13.820322   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:13.854021   72122 cri.go:89] found id: ""
	I0910 19:01:13.854056   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.854068   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:13.854075   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:13.854138   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:13.888292   72122 cri.go:89] found id: ""
	I0910 19:01:13.888321   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.888331   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:13.888338   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:13.888398   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:13.922301   72122 cri.go:89] found id: ""
	I0910 19:01:13.922330   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.922341   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:13.922349   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:13.922408   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:13.959977   72122 cri.go:89] found id: ""
	I0910 19:01:13.960002   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.960010   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:13.960015   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:13.960074   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:13.995255   72122 cri.go:89] found id: ""
	I0910 19:01:13.995282   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.995293   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:13.995308   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:13.995323   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:14.050760   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:14.050790   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:14.064694   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:14.064723   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:14.137406   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:14.137431   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:14.137447   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:14.216624   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:14.216657   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:16.765643   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:16.778746   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:16.778821   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:16.814967   72122 cri.go:89] found id: ""
	I0910 19:01:16.814999   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.815010   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:16.815017   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:16.815073   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:16.850306   72122 cri.go:89] found id: ""
	I0910 19:01:16.850334   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.850345   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:16.850352   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:16.850413   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:16.886104   72122 cri.go:89] found id: ""
	I0910 19:01:16.886134   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.886144   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:16.886152   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:16.886218   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:16.921940   72122 cri.go:89] found id: ""
	I0910 19:01:16.921968   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.921977   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:16.921983   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:16.922032   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:16.956132   72122 cri.go:89] found id: ""
	I0910 19:01:16.956166   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.956177   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:16.956185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:16.956247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:16.988240   72122 cri.go:89] found id: ""
	I0910 19:01:16.988269   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.988278   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:16.988284   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:16.988330   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:17.022252   72122 cri.go:89] found id: ""
	I0910 19:01:17.022281   72122 logs.go:276] 0 containers: []
	W0910 19:01:17.022291   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:17.022297   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:17.022364   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:17.058664   72122 cri.go:89] found id: ""
	I0910 19:01:17.058693   72122 logs.go:276] 0 containers: []
	W0910 19:01:17.058703   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:17.058715   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:17.058740   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:17.136927   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:17.136964   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:17.189427   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:17.189457   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:17.242193   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:17.242225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:17.257878   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:17.257908   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:17.330096   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:17.724465   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:20.224230   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:18.534806   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:21.034368   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:17.667897   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:20.165174   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:22.165421   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:19.831030   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:19.844516   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:19.844581   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:19.879878   72122 cri.go:89] found id: ""
	I0910 19:01:19.879908   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.879919   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:19.879927   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:19.879988   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:19.915992   72122 cri.go:89] found id: ""
	I0910 19:01:19.916018   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.916025   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:19.916030   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:19.916084   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:19.949206   72122 cri.go:89] found id: ""
	I0910 19:01:19.949232   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.949242   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:19.949249   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:19.949311   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:19.983011   72122 cri.go:89] found id: ""
	I0910 19:01:19.983035   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.983043   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:19.983048   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:19.983096   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:20.018372   72122 cri.go:89] found id: ""
	I0910 19:01:20.018394   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.018402   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:20.018408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:20.018466   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:20.053941   72122 cri.go:89] found id: ""
	I0910 19:01:20.053967   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.053975   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:20.053980   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:20.054037   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:20.084999   72122 cri.go:89] found id: ""
	I0910 19:01:20.085026   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.085035   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:20.085042   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:20.085115   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:20.124036   72122 cri.go:89] found id: ""
	I0910 19:01:20.124063   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.124072   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:20.124086   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:20.124103   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:20.176917   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:20.176944   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:20.190831   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:20.190852   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:20.257921   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:20.257942   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:20.257954   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:20.335320   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:20.335350   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:22.723788   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:25.223765   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:23.034456   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:25.534821   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:24.663208   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:26.664282   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:22.875167   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:22.888803   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:22.888858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:22.922224   72122 cri.go:89] found id: ""
	I0910 19:01:22.922252   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.922264   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:22.922270   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:22.922328   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:22.959502   72122 cri.go:89] found id: ""
	I0910 19:01:22.959536   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.959546   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:22.959553   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:22.959619   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:22.992914   72122 cri.go:89] found id: ""
	I0910 19:01:22.992944   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.992955   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:22.992962   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:22.993022   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:23.028342   72122 cri.go:89] found id: ""
	I0910 19:01:23.028367   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.028376   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:23.028384   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:23.028443   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:23.064715   72122 cri.go:89] found id: ""
	I0910 19:01:23.064742   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.064753   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:23.064761   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:23.064819   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:23.100752   72122 cri.go:89] found id: ""
	I0910 19:01:23.100781   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.100789   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:23.100795   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:23.100857   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:23.136017   72122 cri.go:89] found id: ""
	I0910 19:01:23.136045   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.136055   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:23.136062   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:23.136108   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:23.170787   72122 cri.go:89] found id: ""
	I0910 19:01:23.170811   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.170819   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:23.170826   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:23.170840   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:23.210031   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:23.210059   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:23.261525   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:23.261557   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:23.275611   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:23.275636   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:23.348543   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:23.348568   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:23.348582   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:25.929406   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:25.942658   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:25.942737   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:25.977231   72122 cri.go:89] found id: ""
	I0910 19:01:25.977260   72122 logs.go:276] 0 containers: []
	W0910 19:01:25.977270   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:25.977277   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:25.977336   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:26.015060   72122 cri.go:89] found id: ""
	I0910 19:01:26.015093   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.015103   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:26.015110   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:26.015180   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:26.053618   72122 cri.go:89] found id: ""
	I0910 19:01:26.053643   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.053651   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:26.053656   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:26.053712   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:26.090489   72122 cri.go:89] found id: ""
	I0910 19:01:26.090515   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.090523   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:26.090529   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:26.090587   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:26.126687   72122 cri.go:89] found id: ""
	I0910 19:01:26.126710   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.126718   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:26.126723   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:26.126771   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:26.160901   72122 cri.go:89] found id: ""
	I0910 19:01:26.160939   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.160951   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:26.160959   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:26.161017   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:26.195703   72122 cri.go:89] found id: ""
	I0910 19:01:26.195728   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.195737   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:26.195743   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:26.195794   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:26.230394   72122 cri.go:89] found id: ""
	I0910 19:01:26.230414   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.230422   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:26.230430   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:26.230444   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:26.296884   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:26.296905   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:26.296926   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:26.371536   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:26.371569   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:26.412926   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:26.412958   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:26.462521   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:26.462550   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:27.725957   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:30.224312   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:28.034338   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:30.034794   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:32.035284   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:28.668205   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:31.166271   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:28.976550   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:28.989517   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:28.989586   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:29.025638   72122 cri.go:89] found id: ""
	I0910 19:01:29.025662   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.025671   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:29.025677   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:29.025719   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:29.067473   72122 cri.go:89] found id: ""
	I0910 19:01:29.067495   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.067502   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:29.067507   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:29.067556   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:29.105587   72122 cri.go:89] found id: ""
	I0910 19:01:29.105616   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.105628   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:29.105635   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:29.105696   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:29.142427   72122 cri.go:89] found id: ""
	I0910 19:01:29.142458   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.142468   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:29.142474   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:29.142530   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:29.178553   72122 cri.go:89] found id: ""
	I0910 19:01:29.178575   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.178582   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:29.178587   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:29.178638   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:29.212997   72122 cri.go:89] found id: ""
	I0910 19:01:29.213025   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.213034   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:29.213040   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:29.213109   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:29.247057   72122 cri.go:89] found id: ""
	I0910 19:01:29.247083   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.247091   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:29.247097   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:29.247151   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:29.285042   72122 cri.go:89] found id: ""
	I0910 19:01:29.285084   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.285096   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:29.285107   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:29.285131   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:29.336003   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:29.336033   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:29.349867   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:29.349890   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:29.422006   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:29.422028   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:29.422043   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:29.504047   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:29.504079   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:32.050723   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:32.063851   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:32.063904   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:32.100816   72122 cri.go:89] found id: ""
	I0910 19:01:32.100841   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.100851   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:32.100858   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:32.100924   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:32.134863   72122 cri.go:89] found id: ""
	I0910 19:01:32.134892   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.134902   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:32.134909   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:32.134967   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:32.169873   72122 cri.go:89] found id: ""
	I0910 19:01:32.169901   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.169912   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:32.169919   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:32.169973   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:32.202161   72122 cri.go:89] found id: ""
	I0910 19:01:32.202187   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.202197   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:32.202204   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:32.202264   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:32.236850   72122 cri.go:89] found id: ""
	I0910 19:01:32.236879   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.236888   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:32.236896   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:32.236957   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:32.271479   72122 cri.go:89] found id: ""
	I0910 19:01:32.271511   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.271530   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:32.271542   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:32.271701   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:32.306724   72122 cri.go:89] found id: ""
	I0910 19:01:32.306747   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.306754   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:32.306760   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:32.306811   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:32.341153   72122 cri.go:89] found id: ""
	I0910 19:01:32.341184   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.341195   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:32.341206   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:32.341221   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:32.393087   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:32.393122   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:32.406565   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:32.406591   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:32.478030   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:32.478048   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:32.478079   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:32.224371   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:34.723372   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:34.533510   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:37.033933   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:33.671725   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:36.165396   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:32.568440   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:32.568478   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:35.112022   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:35.125210   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:35.125286   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:35.160716   72122 cri.go:89] found id: ""
	I0910 19:01:35.160743   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.160753   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:35.160759   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:35.160817   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:35.196500   72122 cri.go:89] found id: ""
	I0910 19:01:35.196530   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.196541   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:35.196548   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:35.196622   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:35.232476   72122 cri.go:89] found id: ""
	I0910 19:01:35.232510   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.232521   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:35.232528   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:35.232590   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:35.269612   72122 cri.go:89] found id: ""
	I0910 19:01:35.269635   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.269644   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:35.269649   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:35.269697   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:35.307368   72122 cri.go:89] found id: ""
	I0910 19:01:35.307393   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.307401   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:35.307408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:35.307475   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:35.342079   72122 cri.go:89] found id: ""
	I0910 19:01:35.342108   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.342119   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:35.342126   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:35.342188   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:35.379732   72122 cri.go:89] found id: ""
	I0910 19:01:35.379761   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.379771   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:35.379778   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:35.379840   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:35.419067   72122 cri.go:89] found id: ""
	I0910 19:01:35.419098   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.419109   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:35.419120   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:35.419139   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:35.472459   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:35.472494   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:35.487044   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:35.487078   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:35.565242   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:35.565264   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:35.565282   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:35.645918   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:35.645951   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:36.724330   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:38.724368   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:41.224272   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:39.533968   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:41.534579   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:38.666059   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:41.164158   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:38.189238   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:38.203973   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:38.204035   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:38.241402   72122 cri.go:89] found id: ""
	I0910 19:01:38.241428   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.241438   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:38.241446   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:38.241506   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:38.280657   72122 cri.go:89] found id: ""
	I0910 19:01:38.280685   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.280693   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:38.280698   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:38.280753   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:38.319697   72122 cri.go:89] found id: ""
	I0910 19:01:38.319725   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.319735   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:38.319742   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:38.319804   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:38.356766   72122 cri.go:89] found id: ""
	I0910 19:01:38.356799   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.356810   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:38.356817   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:38.356876   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:38.395468   72122 cri.go:89] found id: ""
	I0910 19:01:38.395497   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.395508   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:38.395516   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:38.395577   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:38.434942   72122 cri.go:89] found id: ""
	I0910 19:01:38.434965   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.434974   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:38.434979   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:38.435025   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:38.470687   72122 cri.go:89] found id: ""
	I0910 19:01:38.470715   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.470724   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:38.470729   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:38.470777   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:38.505363   72122 cri.go:89] found id: ""
	I0910 19:01:38.505394   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.505405   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:38.505417   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:38.505432   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:38.557735   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:38.557770   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:38.586094   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:38.586128   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:38.665190   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:38.665215   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:38.665231   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:38.743748   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:38.743779   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:41.284310   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:41.299086   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:41.299157   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:41.340453   72122 cri.go:89] found id: ""
	I0910 19:01:41.340476   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.340484   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:41.340489   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:41.340544   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:41.374028   72122 cri.go:89] found id: ""
	I0910 19:01:41.374052   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.374060   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:41.374066   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:41.374117   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:41.413888   72122 cri.go:89] found id: ""
	I0910 19:01:41.413915   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.413929   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:41.413935   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:41.413994   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:41.450846   72122 cri.go:89] found id: ""
	I0910 19:01:41.450873   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.450883   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:41.450890   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:41.450950   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:41.484080   72122 cri.go:89] found id: ""
	I0910 19:01:41.484107   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.484115   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:41.484120   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:41.484168   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:41.523652   72122 cri.go:89] found id: ""
	I0910 19:01:41.523677   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.523685   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:41.523690   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:41.523749   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:41.563690   72122 cri.go:89] found id: ""
	I0910 19:01:41.563715   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.563727   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:41.563734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:41.563797   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:41.602101   72122 cri.go:89] found id: ""
	I0910 19:01:41.602122   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.602130   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:41.602137   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:41.602152   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:41.655459   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:41.655488   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:41.670037   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:41.670062   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:41.741399   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:41.741417   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:41.741428   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:41.817411   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:41.817445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:43.726285   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:46.223867   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:44.034404   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:46.533246   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:43.666629   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:46.164675   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:44.363631   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:44.378279   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:44.378344   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:44.412450   72122 cri.go:89] found id: ""
	I0910 19:01:44.412486   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.412495   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:44.412502   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:44.412569   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:44.448378   72122 cri.go:89] found id: ""
	I0910 19:01:44.448407   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.448415   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:44.448420   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:44.448470   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:44.483478   72122 cri.go:89] found id: ""
	I0910 19:01:44.483516   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.483524   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:44.483530   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:44.483584   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:44.517787   72122 cri.go:89] found id: ""
	I0910 19:01:44.517812   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.517822   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:44.517828   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:44.517886   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:44.554909   72122 cri.go:89] found id: ""
	I0910 19:01:44.554939   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.554950   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:44.554957   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:44.555018   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:44.589865   72122 cri.go:89] found id: ""
	I0910 19:01:44.589890   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.589909   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:44.589923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:44.589968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:44.626712   72122 cri.go:89] found id: ""
	I0910 19:01:44.626739   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.626749   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:44.626756   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:44.626815   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:44.664985   72122 cri.go:89] found id: ""
	I0910 19:01:44.665067   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.665103   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:44.665114   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:44.665165   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:44.721160   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:44.721196   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:44.735339   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:44.735366   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:44.810056   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:44.810080   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:44.810094   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:44.898822   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:44.898871   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:47.438440   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:47.451438   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:47.451506   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:48.723661   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:50.723768   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:48.534671   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:51.033397   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:48.164739   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:50.665165   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:47.491703   72122 cri.go:89] found id: ""
	I0910 19:01:47.491729   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.491740   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:47.491747   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:47.491811   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:47.526834   72122 cri.go:89] found id: ""
	I0910 19:01:47.526862   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.526874   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:47.526880   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:47.526940   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:47.570463   72122 cri.go:89] found id: ""
	I0910 19:01:47.570488   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.570496   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:47.570503   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:47.570545   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:47.608691   72122 cri.go:89] found id: ""
	I0910 19:01:47.608715   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.608727   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:47.608734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:47.608780   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:47.648279   72122 cri.go:89] found id: ""
	I0910 19:01:47.648308   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.648316   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:47.648324   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:47.648386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:47.684861   72122 cri.go:89] found id: ""
	I0910 19:01:47.684885   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.684892   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:47.684897   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:47.684947   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:47.721004   72122 cri.go:89] found id: ""
	I0910 19:01:47.721037   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.721049   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:47.721056   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:47.721134   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:47.756154   72122 cri.go:89] found id: ""
	I0910 19:01:47.756181   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.756192   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:47.756202   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:47.756217   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:47.806860   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:47.806889   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:47.822419   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:47.822445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:47.891966   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:47.891986   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:47.892000   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:47.978510   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:47.978561   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:50.519264   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:50.533576   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:50.533630   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:50.567574   72122 cri.go:89] found id: ""
	I0910 19:01:50.567601   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.567612   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:50.567619   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:50.567678   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:50.608824   72122 cri.go:89] found id: ""
	I0910 19:01:50.608850   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.608858   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:50.608863   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:50.608939   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:50.644502   72122 cri.go:89] found id: ""
	I0910 19:01:50.644530   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.644538   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:50.644544   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:50.644590   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:50.682309   72122 cri.go:89] found id: ""
	I0910 19:01:50.682332   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.682340   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:50.682345   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:50.682404   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:50.735372   72122 cri.go:89] found id: ""
	I0910 19:01:50.735398   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.735410   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:50.735418   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:50.735482   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:50.786364   72122 cri.go:89] found id: ""
	I0910 19:01:50.786391   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.786401   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:50.786408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:50.786464   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:50.831525   72122 cri.go:89] found id: ""
	I0910 19:01:50.831564   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.831575   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:50.831582   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:50.831645   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:50.873457   72122 cri.go:89] found id: ""
	I0910 19:01:50.873482   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.873493   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:50.873503   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:50.873524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:50.956032   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:50.956069   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:50.996871   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:50.996904   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:51.047799   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:51.047824   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:51.061946   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:51.061970   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:51.136302   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:53.222492   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:55.223835   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:53.034478   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:55.532623   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:52.665991   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:55.164343   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:53.636464   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:53.649971   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:53.650054   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:53.688172   72122 cri.go:89] found id: ""
	I0910 19:01:53.688201   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.688211   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:53.688217   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:53.688274   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:53.725094   72122 cri.go:89] found id: ""
	I0910 19:01:53.725119   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.725128   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:53.725135   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:53.725196   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:53.763866   72122 cri.go:89] found id: ""
	I0910 19:01:53.763893   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.763907   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:53.763914   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:53.763971   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:53.797760   72122 cri.go:89] found id: ""
	I0910 19:01:53.797787   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.797798   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:53.797805   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:53.797862   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:53.830305   72122 cri.go:89] found id: ""
	I0910 19:01:53.830332   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.830340   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:53.830346   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:53.830402   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:53.861970   72122 cri.go:89] found id: ""
	I0910 19:01:53.861995   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.862003   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:53.862009   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:53.862059   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:53.896577   72122 cri.go:89] found id: ""
	I0910 19:01:53.896600   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.896609   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:53.896614   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:53.896660   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:53.935051   72122 cri.go:89] found id: ""
	I0910 19:01:53.935077   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.935086   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:53.935094   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:53.935105   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:53.950252   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:53.950276   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:54.023327   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:54.023346   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:54.023361   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:54.101605   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:54.101643   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:54.142906   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:54.142930   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:56.697701   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:56.717755   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:56.717836   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:56.763564   72122 cri.go:89] found id: ""
	I0910 19:01:56.763594   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.763606   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:56.763613   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:56.763675   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:56.815780   72122 cri.go:89] found id: ""
	I0910 19:01:56.815808   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.815816   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:56.815821   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:56.815883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:56.848983   72122 cri.go:89] found id: ""
	I0910 19:01:56.849013   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.849024   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:56.849032   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:56.849100   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:56.880660   72122 cri.go:89] found id: ""
	I0910 19:01:56.880690   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.880702   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:56.880709   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:56.880756   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:56.922836   72122 cri.go:89] found id: ""
	I0910 19:01:56.922860   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.922867   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:56.922873   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:56.922938   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:56.963474   72122 cri.go:89] found id: ""
	I0910 19:01:56.963505   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.963517   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:56.963524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:56.963585   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:56.996837   72122 cri.go:89] found id: ""
	I0910 19:01:56.996864   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.996872   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:56.996877   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:56.996925   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:57.029594   72122 cri.go:89] found id: ""
	I0910 19:01:57.029629   72122 logs.go:276] 0 containers: []
	W0910 19:01:57.029640   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:57.029651   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:57.029664   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:57.083745   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:57.083772   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:57.099269   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:57.099293   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:57.174098   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:57.174118   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:57.174129   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:57.258833   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:57.258869   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:57.224384   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:59.722547   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:57.533178   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:59.533798   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:02.035089   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:57.665383   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:00.164920   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:59.800644   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:59.814728   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:59.814805   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:59.854081   72122 cri.go:89] found id: ""
	I0910 19:01:59.854113   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.854124   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:59.854133   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:59.854197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:59.889524   72122 cri.go:89] found id: ""
	I0910 19:01:59.889550   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.889560   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:59.889567   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:59.889626   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:59.925833   72122 cri.go:89] found id: ""
	I0910 19:01:59.925859   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.925866   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:59.925872   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:59.925935   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:59.962538   72122 cri.go:89] found id: ""
	I0910 19:01:59.962575   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.962586   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:59.962593   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:59.962650   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:59.996994   72122 cri.go:89] found id: ""
	I0910 19:01:59.997025   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.997037   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:59.997045   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:59.997126   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:00.032881   72122 cri.go:89] found id: ""
	I0910 19:02:00.032905   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.032915   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:00.032923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:00.032988   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:00.065838   72122 cri.go:89] found id: ""
	I0910 19:02:00.065861   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.065869   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:00.065874   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:00.065927   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:00.099479   72122 cri.go:89] found id: ""
	I0910 19:02:00.099505   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.099516   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:00.099526   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:00.099540   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:00.182661   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:00.182689   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:00.223514   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:00.223553   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:00.273695   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:00.273721   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:00.287207   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:00.287233   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:00.353975   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:01.724647   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:04.224071   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:06.225475   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:04.534230   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:06.534474   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:02.665228   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:04.667935   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:07.163506   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:02.854145   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:02.867413   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:02.867484   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:02.904299   72122 cri.go:89] found id: ""
	I0910 19:02:02.904327   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.904335   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:02.904340   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:02.904392   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:02.940981   72122 cri.go:89] found id: ""
	I0910 19:02:02.941010   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.941019   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:02.941024   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:02.941099   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:02.980013   72122 cri.go:89] found id: ""
	I0910 19:02:02.980038   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.980046   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:02.980052   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:02.980111   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:03.020041   72122 cri.go:89] found id: ""
	I0910 19:02:03.020071   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.020080   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:03.020087   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:03.020144   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:03.055228   72122 cri.go:89] found id: ""
	I0910 19:02:03.055264   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.055277   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:03.055285   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:03.055347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:03.088696   72122 cri.go:89] found id: ""
	I0910 19:02:03.088722   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.088730   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:03.088736   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:03.088787   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:03.124753   72122 cri.go:89] found id: ""
	I0910 19:02:03.124776   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.124785   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:03.124792   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:03.124849   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:03.157191   72122 cri.go:89] found id: ""
	I0910 19:02:03.157222   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.157230   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:03.157238   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:03.157248   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:03.239015   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:03.239044   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:03.279323   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:03.279355   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:03.328034   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:03.328067   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:03.341591   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:03.341620   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:03.411057   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:05.911503   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:05.924794   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:05.924868   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:05.958827   72122 cri.go:89] found id: ""
	I0910 19:02:05.958852   72122 logs.go:276] 0 containers: []
	W0910 19:02:05.958859   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:05.958865   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:05.958920   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:05.992376   72122 cri.go:89] found id: ""
	I0910 19:02:05.992412   72122 logs.go:276] 0 containers: []
	W0910 19:02:05.992423   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:05.992429   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:05.992485   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:06.028058   72122 cri.go:89] found id: ""
	I0910 19:02:06.028088   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.028098   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:06.028107   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:06.028162   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:06.066428   72122 cri.go:89] found id: ""
	I0910 19:02:06.066458   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.066470   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:06.066477   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:06.066533   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:06.102750   72122 cri.go:89] found id: ""
	I0910 19:02:06.102774   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.102782   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:06.102787   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:06.102841   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:06.137216   72122 cri.go:89] found id: ""
	I0910 19:02:06.137243   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.137254   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:06.137261   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:06.137323   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:06.175227   72122 cri.go:89] found id: ""
	I0910 19:02:06.175251   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.175259   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:06.175265   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:06.175311   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:06.210197   72122 cri.go:89] found id: ""
	I0910 19:02:06.210222   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.210230   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:06.210238   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:06.210249   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:06.261317   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:06.261353   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:06.275196   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:06.275225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:06.354186   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:06.354205   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:06.354219   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:06.436726   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:06.436763   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:08.723505   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:10.724499   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:09.035939   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:11.534648   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:09.166629   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:11.666941   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:08.979157   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:08.992097   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:08.992156   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:09.025260   72122 cri.go:89] found id: ""
	I0910 19:02:09.025282   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.025289   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:09.025295   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:09.025360   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:09.059139   72122 cri.go:89] found id: ""
	I0910 19:02:09.059166   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.059177   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:09.059186   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:09.059240   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:09.092935   72122 cri.go:89] found id: ""
	I0910 19:02:09.092964   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.092973   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:09.092979   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:09.093027   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:09.127273   72122 cri.go:89] found id: ""
	I0910 19:02:09.127299   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.127310   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:09.127317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:09.127367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:09.163353   72122 cri.go:89] found id: ""
	I0910 19:02:09.163380   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.163389   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:09.163396   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:09.163453   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:09.195371   72122 cri.go:89] found id: ""
	I0910 19:02:09.195396   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.195407   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:09.195414   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:09.195473   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:09.229338   72122 cri.go:89] found id: ""
	I0910 19:02:09.229361   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.229370   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:09.229376   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:09.229432   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:09.262822   72122 cri.go:89] found id: ""
	I0910 19:02:09.262847   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.262857   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:09.262874   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:09.262891   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:09.330079   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:09.330103   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:09.330119   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:09.408969   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:09.409003   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:09.447666   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:09.447702   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:09.501111   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:09.501141   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:12.016407   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:12.030822   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:12.030905   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:12.069191   72122 cri.go:89] found id: ""
	I0910 19:02:12.069218   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.069229   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:12.069236   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:12.069306   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:12.103687   72122 cri.go:89] found id: ""
	I0910 19:02:12.103726   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.103737   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:12.103862   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:12.103937   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:12.142891   72122 cri.go:89] found id: ""
	I0910 19:02:12.142920   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.142932   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:12.142940   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:12.142998   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:12.178966   72122 cri.go:89] found id: ""
	I0910 19:02:12.178991   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.179002   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:12.179010   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:12.179069   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:12.216070   72122 cri.go:89] found id: ""
	I0910 19:02:12.216093   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.216104   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:12.216112   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:12.216161   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:12.251447   72122 cri.go:89] found id: ""
	I0910 19:02:12.251479   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.251492   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:12.251500   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:12.251568   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:12.284640   72122 cri.go:89] found id: ""
	I0910 19:02:12.284666   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.284677   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:12.284682   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:12.284743   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:12.319601   72122 cri.go:89] found id: ""
	I0910 19:02:12.319625   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.319632   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:12.319639   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:12.319650   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:12.372932   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:12.372965   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:12.387204   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:12.387228   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:12.459288   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:12.459308   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:12.459323   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:13.223679   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:15.224341   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:14.034036   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:16.533341   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:13.667258   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:16.164610   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:12.549161   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:12.549198   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:15.092557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:15.105391   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:15.105456   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:15.139486   72122 cri.go:89] found id: ""
	I0910 19:02:15.139515   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.139524   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:15.139530   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:15.139591   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:15.173604   72122 cri.go:89] found id: ""
	I0910 19:02:15.173630   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.173641   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:15.173648   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:15.173710   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:15.208464   72122 cri.go:89] found id: ""
	I0910 19:02:15.208492   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.208503   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:15.208510   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:15.208568   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:15.247536   72122 cri.go:89] found id: ""
	I0910 19:02:15.247567   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.247579   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:15.247586   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:15.247650   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:15.285734   72122 cri.go:89] found id: ""
	I0910 19:02:15.285764   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.285775   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:15.285782   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:15.285858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:15.320755   72122 cri.go:89] found id: ""
	I0910 19:02:15.320782   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.320792   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:15.320798   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:15.320849   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:15.357355   72122 cri.go:89] found id: ""
	I0910 19:02:15.357384   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.357395   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:15.357402   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:15.357463   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:15.392105   72122 cri.go:89] found id: ""
	I0910 19:02:15.392130   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.392137   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:15.392149   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:15.392160   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:15.444433   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:15.444465   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:15.458759   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:15.458784   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:15.523490   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:15.523507   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:15.523524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:15.607584   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:15.607616   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:17.224472   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:19.723953   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:18.534545   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:20.535036   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:18.667949   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:20.669762   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:18.146611   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:18.160311   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:18.160378   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:18.195072   72122 cri.go:89] found id: ""
	I0910 19:02:18.195099   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.195109   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:18.195127   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:18.195201   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:18.230099   72122 cri.go:89] found id: ""
	I0910 19:02:18.230129   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.230138   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:18.230145   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:18.230201   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:18.268497   72122 cri.go:89] found id: ""
	I0910 19:02:18.268525   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.268534   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:18.268539   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:18.268599   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:18.304929   72122 cri.go:89] found id: ""
	I0910 19:02:18.304966   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.304978   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:18.304985   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:18.305048   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:18.339805   72122 cri.go:89] found id: ""
	I0910 19:02:18.339839   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.339861   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:18.339868   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:18.339925   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:18.378353   72122 cri.go:89] found id: ""
	I0910 19:02:18.378372   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.378381   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:18.378393   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:18.378438   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:18.415175   72122 cri.go:89] found id: ""
	I0910 19:02:18.415195   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.415203   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:18.415208   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:18.415262   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:18.450738   72122 cri.go:89] found id: ""
	I0910 19:02:18.450762   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.450769   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:18.450778   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:18.450793   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:18.530943   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:18.530975   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:18.568983   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:18.569021   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:18.622301   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:18.622336   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:18.635788   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:18.635815   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:18.715729   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:21.216082   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:21.229419   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:21.229488   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:21.265152   72122 cri.go:89] found id: ""
	I0910 19:02:21.265183   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.265193   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:21.265201   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:21.265262   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:21.300766   72122 cri.go:89] found id: ""
	I0910 19:02:21.300797   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.300815   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:21.300823   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:21.300883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:21.333416   72122 cri.go:89] found id: ""
	I0910 19:02:21.333443   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.333452   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:21.333460   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:21.333526   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:21.371112   72122 cri.go:89] found id: ""
	I0910 19:02:21.371142   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.371150   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:21.371156   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:21.371214   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:21.405657   72122 cri.go:89] found id: ""
	I0910 19:02:21.405684   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.405695   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:21.405703   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:21.405755   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:21.440354   72122 cri.go:89] found id: ""
	I0910 19:02:21.440381   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.440392   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:21.440400   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:21.440464   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:21.480165   72122 cri.go:89] found id: ""
	I0910 19:02:21.480189   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.480199   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:21.480206   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:21.480273   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:21.518422   72122 cri.go:89] found id: ""
	I0910 19:02:21.518449   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.518459   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:21.518470   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:21.518486   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:21.572263   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:21.572300   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:21.588179   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:21.588204   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:21.658330   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:21.658356   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:21.658371   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:21.743026   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:21.743063   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:21.724730   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.724844   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:26.225026   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.034593   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:25.037588   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.164712   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:25.664475   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:24.286604   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:24.299783   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:24.299847   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:24.336998   72122 cri.go:89] found id: ""
	I0910 19:02:24.337031   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.337042   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:24.337050   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:24.337123   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:24.374198   72122 cri.go:89] found id: ""
	I0910 19:02:24.374223   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.374231   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:24.374236   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:24.374289   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:24.407783   72122 cri.go:89] found id: ""
	I0910 19:02:24.407812   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.407822   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:24.407828   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:24.407881   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:24.443285   72122 cri.go:89] found id: ""
	I0910 19:02:24.443307   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.443315   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:24.443321   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:24.443367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:24.477176   72122 cri.go:89] found id: ""
	I0910 19:02:24.477198   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.477206   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:24.477212   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:24.477266   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:24.509762   72122 cri.go:89] found id: ""
	I0910 19:02:24.509783   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.509791   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:24.509797   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:24.509858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:24.548746   72122 cri.go:89] found id: ""
	I0910 19:02:24.548775   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.548785   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:24.548793   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:24.548851   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:24.583265   72122 cri.go:89] found id: ""
	I0910 19:02:24.583297   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.583313   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:24.583324   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:24.583338   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:24.634966   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:24.635001   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:24.649844   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:24.649869   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:24.721795   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:24.721824   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:24.721840   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:24.807559   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:24.807593   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:27.352779   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:27.366423   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:27.366495   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:27.399555   72122 cri.go:89] found id: ""
	I0910 19:02:27.399582   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.399591   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:27.399596   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:27.399662   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:27.434151   72122 cri.go:89] found id: ""
	I0910 19:02:27.434179   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.434188   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:27.434194   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:27.434265   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:27.467053   72122 cri.go:89] found id: ""
	I0910 19:02:27.467081   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.467092   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:27.467099   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:27.467156   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:28.724149   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:31.224185   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:27.533697   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:29.533815   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:32.034343   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:27.667816   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:30.164174   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:27.500999   72122 cri.go:89] found id: ""
	I0910 19:02:27.501030   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.501039   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:27.501044   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:27.501114   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:27.537981   72122 cri.go:89] found id: ""
	I0910 19:02:27.538000   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.538007   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:27.538012   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:27.538060   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:27.568622   72122 cri.go:89] found id: ""
	I0910 19:02:27.568649   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.568660   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:27.568668   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:27.568724   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:27.603035   72122 cri.go:89] found id: ""
	I0910 19:02:27.603058   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.603067   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:27.603072   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:27.603131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:27.637624   72122 cri.go:89] found id: ""
	I0910 19:02:27.637651   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.637662   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:27.637673   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:27.637693   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:27.651893   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:27.651915   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:27.723949   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:27.723969   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:27.723983   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:27.801463   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:27.801496   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:27.841969   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:27.842000   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:30.398857   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:30.412720   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:30.412790   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:30.448125   72122 cri.go:89] found id: ""
	I0910 19:02:30.448152   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.448163   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:30.448171   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:30.448234   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:30.481988   72122 cri.go:89] found id: ""
	I0910 19:02:30.482016   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.482027   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:30.482035   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:30.482083   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:30.516548   72122 cri.go:89] found id: ""
	I0910 19:02:30.516576   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.516583   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:30.516589   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:30.516646   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:30.566884   72122 cri.go:89] found id: ""
	I0910 19:02:30.566910   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.566918   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:30.566923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:30.566975   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:30.602278   72122 cri.go:89] found id: ""
	I0910 19:02:30.602306   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.602314   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:30.602319   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:30.602379   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:30.636708   72122 cri.go:89] found id: ""
	I0910 19:02:30.636732   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.636740   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:30.636745   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:30.636797   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:30.681255   72122 cri.go:89] found id: ""
	I0910 19:02:30.681280   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.681295   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:30.681303   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:30.681361   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:30.715516   72122 cri.go:89] found id: ""
	I0910 19:02:30.715543   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.715551   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:30.715560   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:30.715572   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:30.768916   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:30.768948   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:30.783318   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:30.783348   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:30.852901   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:30.852925   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:30.852940   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:30.932276   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:30.932314   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:33.725716   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:36.223970   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:34.533148   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:36.533854   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:32.667516   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:35.164375   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:33.471931   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:33.486152   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:33.486211   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:33.524130   72122 cri.go:89] found id: ""
	I0910 19:02:33.524161   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.524173   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:33.524180   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:33.524238   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:33.562216   72122 cri.go:89] found id: ""
	I0910 19:02:33.562238   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.562247   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:33.562252   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:33.562305   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:33.596587   72122 cri.go:89] found id: ""
	I0910 19:02:33.596615   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.596626   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:33.596634   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:33.596692   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:33.633307   72122 cri.go:89] found id: ""
	I0910 19:02:33.633330   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.633338   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:33.633343   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:33.633403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:33.667780   72122 cri.go:89] found id: ""
	I0910 19:02:33.667805   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.667815   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:33.667820   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:33.667878   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:33.702406   72122 cri.go:89] found id: ""
	I0910 19:02:33.702436   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.702447   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:33.702456   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:33.702524   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:33.744544   72122 cri.go:89] found id: ""
	I0910 19:02:33.744574   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.744581   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:33.744587   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:33.744661   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:33.782000   72122 cri.go:89] found id: ""
	I0910 19:02:33.782024   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.782032   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:33.782040   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:33.782053   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:33.858087   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:33.858115   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:33.858133   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:33.943238   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:33.943278   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:33.987776   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:33.987804   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:34.043197   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:34.043232   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:36.558122   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:36.571125   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:36.571195   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:36.606195   72122 cri.go:89] found id: ""
	I0910 19:02:36.606228   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.606239   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:36.606246   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:36.606304   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:36.640248   72122 cri.go:89] found id: ""
	I0910 19:02:36.640290   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.640302   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:36.640310   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:36.640360   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:36.676916   72122 cri.go:89] found id: ""
	I0910 19:02:36.676942   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.676952   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:36.676958   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:36.677013   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:36.713183   72122 cri.go:89] found id: ""
	I0910 19:02:36.713207   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.713218   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:36.713225   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:36.713283   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:36.750748   72122 cri.go:89] found id: ""
	I0910 19:02:36.750775   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.750782   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:36.750787   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:36.750847   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:36.782614   72122 cri.go:89] found id: ""
	I0910 19:02:36.782636   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.782644   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:36.782650   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:36.782709   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:36.822051   72122 cri.go:89] found id: ""
	I0910 19:02:36.822083   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.822094   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:36.822102   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:36.822161   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:36.856068   72122 cri.go:89] found id: ""
	I0910 19:02:36.856096   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.856106   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:36.856117   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:36.856132   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:36.909586   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:36.909625   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:36.931649   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:36.931676   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:37.040146   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:37.040175   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:37.040194   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:37.121902   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:37.121942   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:38.723762   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:40.723880   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:38.534001   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:41.035356   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:37.665212   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:39.668115   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:42.164118   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:39.665474   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:39.678573   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:39.678633   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:39.712755   72122 cri.go:89] found id: ""
	I0910 19:02:39.712783   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.712793   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:39.712800   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:39.712857   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:39.744709   72122 cri.go:89] found id: ""
	I0910 19:02:39.744738   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.744748   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:39.744756   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:39.744809   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:39.780161   72122 cri.go:89] found id: ""
	I0910 19:02:39.780189   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.780200   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:39.780207   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:39.780255   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:39.817665   72122 cri.go:89] found id: ""
	I0910 19:02:39.817695   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.817704   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:39.817709   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:39.817757   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:39.857255   72122 cri.go:89] found id: ""
	I0910 19:02:39.857291   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.857299   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:39.857306   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:39.857381   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:39.893514   72122 cri.go:89] found id: ""
	I0910 19:02:39.893540   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.893550   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:39.893558   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:39.893614   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:39.932720   72122 cri.go:89] found id: ""
	I0910 19:02:39.932753   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.932767   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:39.932775   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:39.932835   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:39.977063   72122 cri.go:89] found id: ""
	I0910 19:02:39.977121   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.977135   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:39.977146   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:39.977168   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:39.991414   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:39.991445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:40.066892   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:40.066910   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:40.066922   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:40.150648   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:40.150680   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:40.198519   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:40.198561   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:42.724332   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:45.223804   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:43.533841   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:45.534665   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:44.164851   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:46.165259   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:42.749906   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:42.769633   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:42.769703   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:42.812576   72122 cri.go:89] found id: ""
	I0910 19:02:42.812603   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.812613   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:42.812620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:42.812682   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:42.846233   72122 cri.go:89] found id: ""
	I0910 19:02:42.846257   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.846266   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:42.846271   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:42.846326   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:42.883564   72122 cri.go:89] found id: ""
	I0910 19:02:42.883593   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.883605   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:42.883612   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:42.883669   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:42.920774   72122 cri.go:89] found id: ""
	I0910 19:02:42.920801   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.920813   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:42.920820   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:42.920883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:42.953776   72122 cri.go:89] found id: ""
	I0910 19:02:42.953808   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.953820   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:42.953829   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:42.953887   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:42.989770   72122 cri.go:89] found id: ""
	I0910 19:02:42.989806   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.989820   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:42.989829   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:42.989893   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:43.022542   72122 cri.go:89] found id: ""
	I0910 19:02:43.022567   72122 logs.go:276] 0 containers: []
	W0910 19:02:43.022574   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:43.022580   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:43.022629   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:43.064308   72122 cri.go:89] found id: ""
	I0910 19:02:43.064329   72122 logs.go:276] 0 containers: []
	W0910 19:02:43.064337   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:43.064344   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:43.064356   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:43.120212   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:43.120243   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:43.134269   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:43.134296   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:43.218840   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:43.218865   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:43.218880   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:43.302560   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:43.302591   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:45.842788   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:45.857495   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:45.857557   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:45.892745   72122 cri.go:89] found id: ""
	I0910 19:02:45.892772   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.892782   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:45.892790   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:45.892888   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:45.928451   72122 cri.go:89] found id: ""
	I0910 19:02:45.928476   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.928486   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:45.928493   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:45.928551   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:45.962868   72122 cri.go:89] found id: ""
	I0910 19:02:45.962899   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.962910   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:45.962918   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:45.962979   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:45.996975   72122 cri.go:89] found id: ""
	I0910 19:02:45.997000   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.997009   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:45.997014   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:45.997065   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:46.032271   72122 cri.go:89] found id: ""
	I0910 19:02:46.032299   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.032309   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:46.032317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:46.032375   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:46.072629   72122 cri.go:89] found id: ""
	I0910 19:02:46.072654   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.072662   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:46.072667   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:46.072713   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:46.112196   72122 cri.go:89] found id: ""
	I0910 19:02:46.112220   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.112228   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:46.112233   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:46.112298   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:46.155700   72122 cri.go:89] found id: ""
	I0910 19:02:46.155732   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.155745   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:46.155759   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:46.155794   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:46.210596   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:46.210624   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:46.224951   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:46.224980   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:46.294571   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:46.294597   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:46.294613   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:46.382431   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:46.382495   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:47.224808   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:49.225392   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:51.227601   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:48.033643   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:50.535490   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:48.665543   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:50.666596   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:48.926582   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:48.941256   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:48.941338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:48.979810   72122 cri.go:89] found id: ""
	I0910 19:02:48.979842   72122 logs.go:276] 0 containers: []
	W0910 19:02:48.979849   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:48.979856   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:48.979917   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:49.015083   72122 cri.go:89] found id: ""
	I0910 19:02:49.015126   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.015136   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:49.015144   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:49.015205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:49.052417   72122 cri.go:89] found id: ""
	I0910 19:02:49.052445   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.052453   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:49.052459   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:49.052511   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:49.092485   72122 cri.go:89] found id: ""
	I0910 19:02:49.092523   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.092533   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:49.092538   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:49.092588   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:49.127850   72122 cri.go:89] found id: ""
	I0910 19:02:49.127882   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.127889   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:49.127897   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:49.127952   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:49.160693   72122 cri.go:89] found id: ""
	I0910 19:02:49.160724   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.160733   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:49.160740   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:49.160798   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:49.194713   72122 cri.go:89] found id: ""
	I0910 19:02:49.194737   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.194744   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:49.194750   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:49.194804   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:49.229260   72122 cri.go:89] found id: ""
	I0910 19:02:49.229283   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.229292   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:49.229303   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:49.229320   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:49.281963   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:49.281992   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:49.294789   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:49.294809   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:49.366126   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:49.366152   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:49.366172   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:49.451187   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:49.451225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:51.990361   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:52.003744   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:52.003807   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:52.036794   72122 cri.go:89] found id: ""
	I0910 19:02:52.036824   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.036834   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:52.036840   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:52.036896   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:52.074590   72122 cri.go:89] found id: ""
	I0910 19:02:52.074613   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.074620   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:52.074625   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:52.074675   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:52.119926   72122 cri.go:89] found id: ""
	I0910 19:02:52.119967   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.119981   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:52.119990   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:52.120075   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:52.157862   72122 cri.go:89] found id: ""
	I0910 19:02:52.157889   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.157900   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:52.157906   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:52.157968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:52.198645   72122 cri.go:89] found id: ""
	I0910 19:02:52.198675   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.198686   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:52.198693   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:52.198756   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:52.240091   72122 cri.go:89] found id: ""
	I0910 19:02:52.240113   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.240129   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:52.240139   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:52.240197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:52.275046   72122 cri.go:89] found id: ""
	I0910 19:02:52.275079   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.275090   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:52.275098   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:52.275179   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:52.311141   72122 cri.go:89] found id: ""
	I0910 19:02:52.311172   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.311184   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:52.311196   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:52.311211   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:52.400004   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:52.400039   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:52.449043   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:52.449090   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:53.724151   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:56.223353   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:53.033328   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:55.035259   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:53.164639   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:55.165714   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:52.502304   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:52.502336   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:52.518747   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:52.518772   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:52.593581   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:55.094092   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:55.108752   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:55.108830   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:55.143094   72122 cri.go:89] found id: ""
	I0910 19:02:55.143122   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.143133   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:55.143141   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:55.143198   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:55.184298   72122 cri.go:89] found id: ""
	I0910 19:02:55.184326   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.184334   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:55.184340   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:55.184397   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:55.216557   72122 cri.go:89] found id: ""
	I0910 19:02:55.216585   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.216596   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:55.216613   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:55.216676   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:55.251049   72122 cri.go:89] found id: ""
	I0910 19:02:55.251075   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.251083   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:55.251090   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:55.251152   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:55.282689   72122 cri.go:89] found id: ""
	I0910 19:02:55.282716   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.282724   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:55.282729   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:55.282800   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:55.316959   72122 cri.go:89] found id: ""
	I0910 19:02:55.316993   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.317004   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:55.317011   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:55.317085   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:55.353110   72122 cri.go:89] found id: ""
	I0910 19:02:55.353134   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.353143   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:55.353149   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:55.353205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:55.392391   72122 cri.go:89] found id: ""
	I0910 19:02:55.392422   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.392434   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:55.392446   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:55.392461   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:55.445431   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:55.445469   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:55.459348   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:55.459374   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:55.528934   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:55.528957   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:55.528973   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:55.610797   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:55.610833   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:58.223882   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:00.223951   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:57.533754   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:59.535018   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:01.535255   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:57.667276   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:00.164510   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:58.152775   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:58.166383   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:58.166440   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:58.203198   72122 cri.go:89] found id: ""
	I0910 19:02:58.203225   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.203233   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:58.203239   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:58.203284   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:58.240538   72122 cri.go:89] found id: ""
	I0910 19:02:58.240560   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.240567   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:58.240573   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:58.240633   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:58.274802   72122 cri.go:89] found id: ""
	I0910 19:02:58.274826   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.274833   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:58.274839   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:58.274886   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:58.311823   72122 cri.go:89] found id: ""
	I0910 19:02:58.311857   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.311868   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:58.311876   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:58.311933   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:58.347260   72122 cri.go:89] found id: ""
	I0910 19:02:58.347281   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.347288   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:58.347294   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:58.347338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:58.382621   72122 cri.go:89] found id: ""
	I0910 19:02:58.382645   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.382655   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:58.382662   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:58.382720   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:58.418572   72122 cri.go:89] found id: ""
	I0910 19:02:58.418597   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.418605   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:58.418611   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:58.418663   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:58.459955   72122 cri.go:89] found id: ""
	I0910 19:02:58.459987   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.459995   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:58.460003   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:58.460016   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:58.512831   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:58.512866   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:58.527036   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:58.527067   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:58.593329   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:58.593350   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:58.593366   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:58.671171   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:58.671201   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:01.211905   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:01.226567   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:01.226724   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:01.261860   72122 cri.go:89] found id: ""
	I0910 19:03:01.261885   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.261893   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:01.261898   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:01.261946   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:01.294754   72122 cri.go:89] found id: ""
	I0910 19:03:01.294774   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.294781   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:01.294786   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:01.294833   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:01.328378   72122 cri.go:89] found id: ""
	I0910 19:03:01.328403   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.328412   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:01.328417   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:01.328465   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:01.363344   72122 cri.go:89] found id: ""
	I0910 19:03:01.363370   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.363380   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:01.363388   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:01.363446   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:01.398539   72122 cri.go:89] found id: ""
	I0910 19:03:01.398576   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.398586   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:01.398593   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:01.398654   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:01.431367   72122 cri.go:89] found id: ""
	I0910 19:03:01.431390   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.431397   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:01.431403   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:01.431458   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:01.464562   72122 cri.go:89] found id: ""
	I0910 19:03:01.464589   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.464599   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:01.464606   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:01.464666   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:01.497493   72122 cri.go:89] found id: ""
	I0910 19:03:01.497520   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.497531   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:01.497540   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:01.497555   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:01.583083   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:01.583140   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:01.624887   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:01.624919   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:01.676124   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:01.676155   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:01.690861   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:01.690894   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:01.763695   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:02.724017   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.725049   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.033371   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:06.033600   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:02.666137   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.669740   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:07.164822   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.264867   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:04.279106   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:04.279176   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:04.315358   72122 cri.go:89] found id: ""
	I0910 19:03:04.315390   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.315398   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:04.315403   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:04.315457   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:04.359466   72122 cri.go:89] found id: ""
	I0910 19:03:04.359489   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.359496   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:04.359504   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:04.359563   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:04.399504   72122 cri.go:89] found id: ""
	I0910 19:03:04.399529   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.399538   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:04.399545   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:04.399604   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:04.438244   72122 cri.go:89] found id: ""
	I0910 19:03:04.438269   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.438277   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:04.438282   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:04.438340   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:04.475299   72122 cri.go:89] found id: ""
	I0910 19:03:04.475321   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.475329   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:04.475334   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:04.475386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:04.516500   72122 cri.go:89] found id: ""
	I0910 19:03:04.516520   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.516529   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:04.516534   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:04.516588   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:04.551191   72122 cri.go:89] found id: ""
	I0910 19:03:04.551214   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.551222   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:04.551228   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:04.551273   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:04.585646   72122 cri.go:89] found id: ""
	I0910 19:03:04.585667   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.585675   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:04.585684   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:04.585699   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:04.598832   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:04.598858   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:04.670117   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:04.670140   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:04.670156   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:04.746592   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:04.746626   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:04.784061   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:04.784088   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:07.337082   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:07.350696   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:07.350752   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:07.387344   72122 cri.go:89] found id: ""
	I0910 19:03:07.387373   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.387384   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:07.387391   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:07.387449   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:07.420468   72122 cri.go:89] found id: ""
	I0910 19:03:07.420490   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.420498   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:07.420503   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:07.420566   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:07.453746   72122 cri.go:89] found id: ""
	I0910 19:03:07.453773   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.453784   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:07.453791   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:07.453845   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:07.487359   72122 cri.go:89] found id: ""
	I0910 19:03:07.487388   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.487400   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:07.487407   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:07.487470   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:07.223432   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:09.723164   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:08.033767   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:10.035613   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:09.165972   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:11.663740   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:07.520803   72122 cri.go:89] found id: ""
	I0910 19:03:07.520827   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.520834   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:07.520839   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:07.520898   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:07.556908   72122 cri.go:89] found id: ""
	I0910 19:03:07.556934   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.556945   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:07.556953   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:07.557017   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:07.596072   72122 cri.go:89] found id: ""
	I0910 19:03:07.596093   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.596102   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:07.596107   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:07.596165   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:07.631591   72122 cri.go:89] found id: ""
	I0910 19:03:07.631620   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.631630   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:07.631639   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:07.631661   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:07.683892   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:07.683923   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:07.697619   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:07.697645   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:07.766370   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:07.766397   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:07.766413   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:07.854102   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:07.854140   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:10.400185   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:10.412771   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:10.412842   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:10.447710   72122 cri.go:89] found id: ""
	I0910 19:03:10.447739   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.447750   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:10.447757   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:10.447822   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:10.480865   72122 cri.go:89] found id: ""
	I0910 19:03:10.480892   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.480902   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:10.480909   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:10.480966   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:10.514893   72122 cri.go:89] found id: ""
	I0910 19:03:10.514919   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.514927   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:10.514933   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:10.514994   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:10.556332   72122 cri.go:89] found id: ""
	I0910 19:03:10.556374   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.556385   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:10.556392   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:10.556457   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:10.590529   72122 cri.go:89] found id: ""
	I0910 19:03:10.590562   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.590573   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:10.590581   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:10.590642   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:10.623697   72122 cri.go:89] found id: ""
	I0910 19:03:10.623724   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.623732   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:10.623737   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:10.623788   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:10.659236   72122 cri.go:89] found id: ""
	I0910 19:03:10.659259   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.659270   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:10.659277   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:10.659338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:10.693150   72122 cri.go:89] found id: ""
	I0910 19:03:10.693182   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.693192   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:10.693202   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:10.693217   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:10.744624   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:10.744663   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:10.758797   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:10.758822   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:10.853796   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:10.853815   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:10.853827   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:10.937972   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:10.938019   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:11.724808   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:14.224052   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:12.535134   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:15.033867   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:17.034507   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:13.667548   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:16.164483   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:13.481898   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:13.495440   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:13.495505   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:13.531423   72122 cri.go:89] found id: ""
	I0910 19:03:13.531452   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.531463   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:13.531470   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:13.531532   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:13.571584   72122 cri.go:89] found id: ""
	I0910 19:03:13.571607   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.571615   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:13.571620   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:13.571674   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:13.609670   72122 cri.go:89] found id: ""
	I0910 19:03:13.609695   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.609702   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:13.609707   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:13.609761   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:13.644726   72122 cri.go:89] found id: ""
	I0910 19:03:13.644755   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.644766   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:13.644773   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:13.644831   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:13.679692   72122 cri.go:89] found id: ""
	I0910 19:03:13.679722   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.679733   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:13.679741   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:13.679791   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:13.717148   72122 cri.go:89] found id: ""
	I0910 19:03:13.717177   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.717186   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:13.717192   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:13.717247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:13.755650   72122 cri.go:89] found id: ""
	I0910 19:03:13.755676   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.755688   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:13.755693   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:13.755740   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:13.788129   72122 cri.go:89] found id: ""
	I0910 19:03:13.788158   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.788169   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:13.788179   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:13.788194   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:13.865241   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:13.865277   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:13.909205   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:13.909233   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:13.963495   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:13.963523   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:13.977311   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:13.977337   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:14.047015   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:16.547505   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:16.568333   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:16.568412   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:16.610705   72122 cri.go:89] found id: ""
	I0910 19:03:16.610734   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.610744   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:16.610752   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:16.610808   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:16.647307   72122 cri.go:89] found id: ""
	I0910 19:03:16.647333   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.647340   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:16.647345   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:16.647409   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:16.684513   72122 cri.go:89] found id: ""
	I0910 19:03:16.684536   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.684544   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:16.684549   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:16.684602   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:16.718691   72122 cri.go:89] found id: ""
	I0910 19:03:16.718719   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.718729   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:16.718734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:16.718794   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:16.753250   72122 cri.go:89] found id: ""
	I0910 19:03:16.753279   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.753291   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:16.753298   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:16.753358   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:16.788953   72122 cri.go:89] found id: ""
	I0910 19:03:16.788984   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.789001   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:16.789009   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:16.789084   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:16.823715   72122 cri.go:89] found id: ""
	I0910 19:03:16.823746   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.823760   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:16.823767   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:16.823837   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:16.858734   72122 cri.go:89] found id: ""
	I0910 19:03:16.858758   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.858770   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:16.858780   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:16.858795   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:16.897983   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:16.898012   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:16.950981   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:16.951015   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:16.964809   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:16.964839   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:17.039142   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:17.039163   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:17.039177   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:16.724218   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:19.223909   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:19.533783   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:21.534203   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:18.164708   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:20.664302   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:19.619941   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:19.634432   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:19.634489   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:19.671220   72122 cri.go:89] found id: ""
	I0910 19:03:19.671246   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.671256   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:19.671264   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:19.671322   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:19.704251   72122 cri.go:89] found id: ""
	I0910 19:03:19.704278   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.704294   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:19.704301   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:19.704347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:19.745366   72122 cri.go:89] found id: ""
	I0910 19:03:19.745393   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.745403   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:19.745410   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:19.745466   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:19.781100   72122 cri.go:89] found id: ""
	I0910 19:03:19.781129   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.781136   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:19.781141   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:19.781195   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:19.817177   72122 cri.go:89] found id: ""
	I0910 19:03:19.817207   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.817219   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:19.817226   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:19.817292   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:19.852798   72122 cri.go:89] found id: ""
	I0910 19:03:19.852829   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.852837   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:19.852842   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:19.852889   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:19.887173   72122 cri.go:89] found id: ""
	I0910 19:03:19.887200   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.887210   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:19.887219   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:19.887409   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:19.922997   72122 cri.go:89] found id: ""
	I0910 19:03:19.923026   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.923038   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:19.923049   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:19.923063   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:19.975703   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:19.975736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:19.989834   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:19.989866   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:20.061312   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:20.061332   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:20.061344   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:20.143045   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:20.143080   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:21.723250   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:23.723771   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:25.724346   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:24.036790   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:26.533830   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:22.664756   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:25.164650   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:22.681900   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:22.694860   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:22.694923   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:22.738529   72122 cri.go:89] found id: ""
	I0910 19:03:22.738553   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.738563   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:22.738570   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:22.738640   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:22.778102   72122 cri.go:89] found id: ""
	I0910 19:03:22.778132   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.778143   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:22.778150   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:22.778207   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:22.813273   72122 cri.go:89] found id: ""
	I0910 19:03:22.813307   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.813320   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:22.813334   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:22.813397   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:22.849613   72122 cri.go:89] found id: ""
	I0910 19:03:22.849637   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.849646   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:22.849651   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:22.849701   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:22.883138   72122 cri.go:89] found id: ""
	I0910 19:03:22.883167   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.883178   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:22.883185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:22.883237   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:22.918521   72122 cri.go:89] found id: ""
	I0910 19:03:22.918550   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.918567   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:22.918574   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:22.918632   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:22.966657   72122 cri.go:89] found id: ""
	I0910 19:03:22.966684   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.966691   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:22.966701   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:22.966762   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:23.022254   72122 cri.go:89] found id: ""
	I0910 19:03:23.022282   72122 logs.go:276] 0 containers: []
	W0910 19:03:23.022290   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:23.022298   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:23.022309   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:23.082347   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:23.082386   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:23.096792   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:23.096814   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:23.172720   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:23.172740   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:23.172754   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:23.256155   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:23.256193   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:25.797211   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:25.810175   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:25.810234   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:25.844848   72122 cri.go:89] found id: ""
	I0910 19:03:25.844876   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.844886   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:25.844901   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:25.844968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:25.877705   72122 cri.go:89] found id: ""
	I0910 19:03:25.877736   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.877747   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:25.877755   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:25.877807   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:25.913210   72122 cri.go:89] found id: ""
	I0910 19:03:25.913238   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.913248   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:25.913256   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:25.913316   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:25.947949   72122 cri.go:89] found id: ""
	I0910 19:03:25.947974   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.947984   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:25.947991   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:25.948050   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:25.983487   72122 cri.go:89] found id: ""
	I0910 19:03:25.983511   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.983519   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:25.983524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:25.983573   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:26.018176   72122 cri.go:89] found id: ""
	I0910 19:03:26.018201   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.018209   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:26.018214   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:26.018271   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:26.052063   72122 cri.go:89] found id: ""
	I0910 19:03:26.052087   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.052097   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:26.052104   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:26.052165   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:26.091919   72122 cri.go:89] found id: ""
	I0910 19:03:26.091949   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.091958   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:26.091968   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:26.091983   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:26.146059   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:26.146094   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:26.160529   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:26.160562   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:26.230742   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:26.230764   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:26.230778   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:26.313191   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:26.313222   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:27.724922   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:30.223811   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:29.039957   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:31.533256   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:27.665626   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:29.666857   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:32.165153   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:28.858457   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:28.873725   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:28.873788   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:28.922685   72122 cri.go:89] found id: ""
	I0910 19:03:28.922717   72122 logs.go:276] 0 containers: []
	W0910 19:03:28.922729   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:28.922737   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:28.922795   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:28.973236   72122 cri.go:89] found id: ""
	I0910 19:03:28.973260   72122 logs.go:276] 0 containers: []
	W0910 19:03:28.973270   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:28.973277   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:28.973339   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:29.008999   72122 cri.go:89] found id: ""
	I0910 19:03:29.009049   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.009062   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:29.009081   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:29.009148   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:29.049009   72122 cri.go:89] found id: ""
	I0910 19:03:29.049037   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.049047   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:29.049056   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:29.049131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:29.089543   72122 cri.go:89] found id: ""
	I0910 19:03:29.089564   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.089573   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:29.089578   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:29.089648   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:29.126887   72122 cri.go:89] found id: ""
	I0910 19:03:29.126911   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.126918   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:29.126924   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:29.126969   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:29.161369   72122 cri.go:89] found id: ""
	I0910 19:03:29.161395   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.161405   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:29.161412   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:29.161474   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:29.199627   72122 cri.go:89] found id: ""
	I0910 19:03:29.199652   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.199661   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:29.199672   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:29.199691   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:29.268353   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:29.268386   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:29.268401   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:29.351470   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:29.351504   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:29.391768   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:29.391796   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:29.442705   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:29.442736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:31.957567   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:31.970218   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:31.970274   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:32.004870   72122 cri.go:89] found id: ""
	I0910 19:03:32.004898   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.004908   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:32.004915   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:32.004971   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:32.045291   72122 cri.go:89] found id: ""
	I0910 19:03:32.045322   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.045331   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:32.045337   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:32.045403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:32.085969   72122 cri.go:89] found id: ""
	I0910 19:03:32.085999   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.086007   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:32.086013   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:32.086067   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:32.120100   72122 cri.go:89] found id: ""
	I0910 19:03:32.120127   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.120135   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:32.120141   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:32.120187   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:32.153977   72122 cri.go:89] found id: ""
	I0910 19:03:32.154004   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.154011   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:32.154016   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:32.154065   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:32.195980   72122 cri.go:89] found id: ""
	I0910 19:03:32.196005   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.196013   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:32.196019   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:32.196068   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:32.233594   72122 cri.go:89] found id: ""
	I0910 19:03:32.233616   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.233623   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:32.233632   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:32.233677   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:32.268118   72122 cri.go:89] found id: ""
	I0910 19:03:32.268144   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.268152   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:32.268160   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:32.268171   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:32.281389   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:32.281416   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:32.359267   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:32.359289   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:32.359304   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:32.445096   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:32.445137   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:32.483288   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:32.483325   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:32.224155   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:34.724191   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:33.537955   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:36.033801   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:34.663475   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:36.665627   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:35.040393   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:35.053698   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:35.053750   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:35.087712   72122 cri.go:89] found id: ""
	I0910 19:03:35.087742   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.087751   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:35.087757   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:35.087802   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:35.125437   72122 cri.go:89] found id: ""
	I0910 19:03:35.125468   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.125482   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:35.125495   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:35.125562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:35.163885   72122 cri.go:89] found id: ""
	I0910 19:03:35.163914   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.163924   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:35.163931   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:35.163989   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:35.199426   72122 cri.go:89] found id: ""
	I0910 19:03:35.199459   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.199471   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:35.199479   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:35.199559   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:35.236388   72122 cri.go:89] found id: ""
	I0910 19:03:35.236408   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.236416   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:35.236421   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:35.236465   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:35.274797   72122 cri.go:89] found id: ""
	I0910 19:03:35.274817   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.274825   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:35.274830   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:35.274874   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:35.308127   72122 cri.go:89] found id: ""
	I0910 19:03:35.308155   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.308166   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:35.308173   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:35.308230   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:35.340675   72122 cri.go:89] found id: ""
	I0910 19:03:35.340697   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.340704   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:35.340712   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:35.340727   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:35.390806   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:35.390842   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:35.404427   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:35.404458   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:35.471526   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:35.471560   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:35.471575   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:35.547469   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:35.547497   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:37.223464   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:39.224137   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:41.224189   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:38.534280   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:40.534728   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:38.666077   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:41.165483   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:38.087127   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:38.100195   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:38.100251   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:38.135386   72122 cri.go:89] found id: ""
	I0910 19:03:38.135408   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.135416   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:38.135422   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:38.135480   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:38.168531   72122 cri.go:89] found id: ""
	I0910 19:03:38.168558   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.168568   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:38.168577   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:38.168639   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:38.202931   72122 cri.go:89] found id: ""
	I0910 19:03:38.202958   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.202968   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:38.202974   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:38.203030   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:38.239185   72122 cri.go:89] found id: ""
	I0910 19:03:38.239209   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.239219   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:38.239226   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:38.239279   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:38.276927   72122 cri.go:89] found id: ""
	I0910 19:03:38.276952   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.276961   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:38.276967   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:38.277035   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:38.311923   72122 cri.go:89] found id: ""
	I0910 19:03:38.311951   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.311962   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:38.311971   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:38.312034   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:38.344981   72122 cri.go:89] found id: ""
	I0910 19:03:38.345012   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.345023   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:38.345030   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:38.345099   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:38.378012   72122 cri.go:89] found id: ""
	I0910 19:03:38.378037   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.378048   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:38.378058   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:38.378076   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:38.449361   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:38.449384   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:38.449396   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:38.530683   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:38.530713   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:38.570047   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:38.570073   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:38.620143   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:38.620176   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:41.134152   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:41.148416   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:41.148509   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:41.186681   72122 cri.go:89] found id: ""
	I0910 19:03:41.186706   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.186713   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:41.186719   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:41.186767   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:41.221733   72122 cri.go:89] found id: ""
	I0910 19:03:41.221758   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.221769   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:41.221776   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:41.221834   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:41.256099   72122 cri.go:89] found id: ""
	I0910 19:03:41.256125   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.256136   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:41.256143   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:41.256194   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:41.289825   72122 cri.go:89] found id: ""
	I0910 19:03:41.289850   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.289860   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:41.289867   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:41.289926   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:41.323551   72122 cri.go:89] found id: ""
	I0910 19:03:41.323581   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.323594   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:41.323601   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:41.323659   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:41.356508   72122 cri.go:89] found id: ""
	I0910 19:03:41.356535   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.356546   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:41.356553   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:41.356608   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:41.391556   72122 cri.go:89] found id: ""
	I0910 19:03:41.391579   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.391586   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:41.391592   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:41.391651   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:41.427685   72122 cri.go:89] found id: ""
	I0910 19:03:41.427711   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.427726   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:41.427743   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:41.427758   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:41.481970   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:41.482001   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:41.495266   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:41.495290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:41.568334   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:41.568357   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:41.568370   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:41.650178   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:41.650211   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:43.724494   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:46.223803   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:43.034100   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:45.035091   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:43.167877   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:45.664633   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:44.193665   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:44.209118   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:44.209197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:44.245792   72122 cri.go:89] found id: ""
	I0910 19:03:44.245819   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.245829   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:44.245834   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:44.245900   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:44.285673   72122 cri.go:89] found id: ""
	I0910 19:03:44.285699   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.285711   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:44.285719   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:44.285787   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:44.326471   72122 cri.go:89] found id: ""
	I0910 19:03:44.326495   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.326505   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:44.326520   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:44.326589   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:44.367864   72122 cri.go:89] found id: ""
	I0910 19:03:44.367890   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.367898   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:44.367907   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:44.367954   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:44.407161   72122 cri.go:89] found id: ""
	I0910 19:03:44.407185   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.407193   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:44.407198   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:44.407256   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:44.446603   72122 cri.go:89] found id: ""
	I0910 19:03:44.446628   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.446638   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:44.446645   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:44.446705   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:44.486502   72122 cri.go:89] found id: ""
	I0910 19:03:44.486526   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.486536   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:44.486543   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:44.486605   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:44.524992   72122 cri.go:89] found id: ""
	I0910 19:03:44.525017   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.525025   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:44.525033   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:44.525044   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:44.579387   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:44.579418   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:44.594045   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:44.594070   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:44.678857   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:44.678883   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:44.678897   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:44.763799   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:44.763830   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:47.305631   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:47.319275   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:47.319347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:47.359199   72122 cri.go:89] found id: ""
	I0910 19:03:47.359222   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.359233   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:47.359240   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:47.359300   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:47.397579   72122 cri.go:89] found id: ""
	I0910 19:03:47.397602   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.397610   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:47.397616   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:47.397674   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:47.431114   72122 cri.go:89] found id: ""
	I0910 19:03:47.431138   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.431146   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:47.431151   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:47.431205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:47.470475   72122 cri.go:89] found id: ""
	I0910 19:03:47.470499   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.470509   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:47.470515   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:47.470573   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:48.227625   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:50.725421   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:47.534967   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:49.027864   71529 pod_ready.go:82] duration metric: took 4m0.000448579s for pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace to be "Ready" ...
	E0910 19:03:49.027890   71529 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0910 19:03:49.027905   71529 pod_ready.go:39] duration metric: took 4m14.536052937s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:03:49.027929   71529 kubeadm.go:597] duration metric: took 4m22.283340761s to restartPrimaryControlPlane
	W0910 19:03:49.027982   71529 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0910 19:03:49.028009   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:03:47.668029   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:50.164077   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:47.504484   72122 cri.go:89] found id: ""
	I0910 19:03:47.504509   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.504518   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:47.504524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:47.504577   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:47.541633   72122 cri.go:89] found id: ""
	I0910 19:03:47.541651   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.541658   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:47.541663   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:47.541706   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:47.579025   72122 cri.go:89] found id: ""
	I0910 19:03:47.579051   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.579060   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:47.579068   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:47.579123   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:47.612333   72122 cri.go:89] found id: ""
	I0910 19:03:47.612359   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.612370   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:47.612380   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:47.612395   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:47.667214   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:47.667242   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:47.683425   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:47.683466   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:47.749510   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:47.749531   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:47.749543   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:47.830454   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:47.830487   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:50.373207   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:50.387191   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:50.387247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:50.422445   72122 cri.go:89] found id: ""
	I0910 19:03:50.422476   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.422488   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:50.422495   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:50.422562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:50.456123   72122 cri.go:89] found id: ""
	I0910 19:03:50.456145   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.456153   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:50.456157   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:50.456211   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:50.488632   72122 cri.go:89] found id: ""
	I0910 19:03:50.488661   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.488672   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:50.488680   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:50.488736   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:50.523603   72122 cri.go:89] found id: ""
	I0910 19:03:50.523628   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.523636   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:50.523641   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:50.523699   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:50.559741   72122 cri.go:89] found id: ""
	I0910 19:03:50.559765   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.559773   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:50.559778   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:50.559842   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:50.595387   72122 cri.go:89] found id: ""
	I0910 19:03:50.595406   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.595414   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:50.595420   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:50.595472   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:50.628720   72122 cri.go:89] found id: ""
	I0910 19:03:50.628747   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.628767   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:50.628774   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:50.628833   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:50.660635   72122 cri.go:89] found id: ""
	I0910 19:03:50.660655   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.660663   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:50.660671   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:50.660683   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:50.716517   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:50.716544   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:50.731411   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:50.731443   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:50.799252   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:50.799275   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:50.799290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:50.874490   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:50.874524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:53.222989   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:55.225335   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:55.225365   71627 pod_ready.go:82] duration metric: took 4m0.007907353s for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	E0910 19:03:55.225523   71627 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0910 19:03:55.225534   71627 pod_ready.go:39] duration metric: took 4m2.40870138s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:03:55.225551   71627 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:03:55.225579   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:55.225629   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:55.270742   71627 cri.go:89] found id: "1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:55.270761   71627 cri.go:89] found id: ""
	I0910 19:03:55.270768   71627 logs.go:276] 1 containers: [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a]
	I0910 19:03:55.270811   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.276233   71627 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:55.276283   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:55.316033   71627 cri.go:89] found id: "f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:55.316051   71627 cri.go:89] found id: ""
	I0910 19:03:55.316058   71627 logs.go:276] 1 containers: [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade]
	I0910 19:03:55.316103   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.320441   71627 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:55.320494   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:55.354406   71627 cri.go:89] found id: "24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:55.354428   71627 cri.go:89] found id: ""
	I0910 19:03:55.354435   71627 logs.go:276] 1 containers: [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100]
	I0910 19:03:55.354482   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.358553   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:55.358621   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:55.393871   71627 cri.go:89] found id: "1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:55.393896   71627 cri.go:89] found id: ""
	I0910 19:03:55.393904   71627 logs.go:276] 1 containers: [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68]
	I0910 19:03:55.393959   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.398102   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:55.398154   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:55.432605   71627 cri.go:89] found id: "48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:55.432625   71627 cri.go:89] found id: ""
	I0910 19:03:55.432632   71627 logs.go:276] 1 containers: [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27]
	I0910 19:03:55.432686   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.437631   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:55.437689   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:55.474250   71627 cri.go:89] found id: "55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:55.474277   71627 cri.go:89] found id: ""
	I0910 19:03:55.474287   71627 logs.go:276] 1 containers: [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20]
	I0910 19:03:55.474352   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.479177   71627 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:55.479235   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:55.514918   71627 cri.go:89] found id: ""
	I0910 19:03:55.514942   71627 logs.go:276] 0 containers: []
	W0910 19:03:55.514951   71627 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:55.514956   71627 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:03:55.515014   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:03:55.549310   71627 cri.go:89] found id: "b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:55.549330   71627 cri.go:89] found id: "173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:55.549335   71627 cri.go:89] found id: ""
	I0910 19:03:55.549347   71627 logs.go:276] 2 containers: [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9]
	I0910 19:03:55.549404   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.553420   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.557502   71627 logs.go:123] Gathering logs for kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] ...
	I0910 19:03:55.557531   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:55.592661   71627 logs.go:123] Gathering logs for kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] ...
	I0910 19:03:55.592685   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:55.629876   71627 logs.go:123] Gathering logs for storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] ...
	I0910 19:03:55.629908   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:55.668935   71627 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:55.668963   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:55.685881   71627 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:55.685906   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:03:55.815552   71627 logs.go:123] Gathering logs for coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] ...
	I0910 19:03:55.815578   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:55.854615   71627 logs.go:123] Gathering logs for kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] ...
	I0910 19:03:55.854640   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:55.906027   71627 logs.go:123] Gathering logs for storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] ...
	I0910 19:03:55.906069   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:55.943771   71627 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:55.943808   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:52.666368   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:55.165213   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:53.417835   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:53.430627   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:53.430694   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:53.469953   72122 cri.go:89] found id: ""
	I0910 19:03:53.469981   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.469992   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:53.469999   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:53.470060   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:53.503712   72122 cri.go:89] found id: ""
	I0910 19:03:53.503739   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.503750   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:53.503757   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:53.503814   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:53.539875   72122 cri.go:89] found id: ""
	I0910 19:03:53.539895   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.539902   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:53.539907   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:53.539952   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:53.575040   72122 cri.go:89] found id: ""
	I0910 19:03:53.575067   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.575078   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:53.575085   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:53.575159   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:53.611171   72122 cri.go:89] found id: ""
	I0910 19:03:53.611193   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.611201   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:53.611206   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:53.611253   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:53.644467   72122 cri.go:89] found id: ""
	I0910 19:03:53.644494   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.644505   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:53.644513   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:53.644575   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:53.680886   72122 cri.go:89] found id: ""
	I0910 19:03:53.680913   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.680924   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:53.680931   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:53.680989   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:53.716834   72122 cri.go:89] found id: ""
	I0910 19:03:53.716863   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.716875   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:53.716885   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:53.716900   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:53.755544   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:53.755568   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:53.807382   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:53.807411   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:53.820289   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:53.820311   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:53.891500   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:53.891524   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:53.891540   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:56.472368   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:56.491939   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:56.492020   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:56.535575   72122 cri.go:89] found id: ""
	I0910 19:03:56.535603   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.535614   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:56.535620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:56.535672   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:56.570366   72122 cri.go:89] found id: ""
	I0910 19:03:56.570390   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.570398   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:56.570403   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:56.570452   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:56.609486   72122 cri.go:89] found id: ""
	I0910 19:03:56.609524   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.609535   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:56.609542   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:56.609608   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:56.650268   72122 cri.go:89] found id: ""
	I0910 19:03:56.650295   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.650305   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:56.650312   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:56.650371   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:56.689113   72122 cri.go:89] found id: ""
	I0910 19:03:56.689139   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.689146   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:56.689154   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:56.689214   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:56.721546   72122 cri.go:89] found id: ""
	I0910 19:03:56.721568   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.721576   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:56.721582   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:56.721639   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:56.753149   72122 cri.go:89] found id: ""
	I0910 19:03:56.753171   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.753179   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:56.753185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:56.753233   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:56.786624   72122 cri.go:89] found id: ""
	I0910 19:03:56.786648   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.786658   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:56.786669   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:56.786683   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:56.840243   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:56.840276   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:56.854453   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:56.854475   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:56.928814   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:56.928835   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:56.928849   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:57.012360   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:57.012403   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:56.443638   71627 logs.go:123] Gathering logs for container status ...
	I0910 19:03:56.443684   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:56.498856   71627 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:56.498897   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:56.573520   71627 logs.go:123] Gathering logs for kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] ...
	I0910 19:03:56.573548   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:56.621270   71627 logs.go:123] Gathering logs for etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] ...
	I0910 19:03:56.621301   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:59.173747   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:59.190441   71627 api_server.go:72] duration metric: took 4m14.110101643s to wait for apiserver process to appear ...
	I0910 19:03:59.190463   71627 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:03:59.190495   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:59.190539   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:59.224716   71627 cri.go:89] found id: "1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:59.224744   71627 cri.go:89] found id: ""
	I0910 19:03:59.224753   71627 logs.go:276] 1 containers: [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a]
	I0910 19:03:59.224811   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.229345   71627 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:59.229412   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:59.263589   71627 cri.go:89] found id: "f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:59.263622   71627 cri.go:89] found id: ""
	I0910 19:03:59.263630   71627 logs.go:276] 1 containers: [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade]
	I0910 19:03:59.263686   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.269664   71627 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:59.269728   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:59.312201   71627 cri.go:89] found id: "24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:59.312224   71627 cri.go:89] found id: ""
	I0910 19:03:59.312233   71627 logs.go:276] 1 containers: [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100]
	I0910 19:03:59.312288   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.317991   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:59.318067   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:59.360625   71627 cri.go:89] found id: "1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:59.360650   71627 cri.go:89] found id: ""
	I0910 19:03:59.360657   71627 logs.go:276] 1 containers: [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68]
	I0910 19:03:59.360707   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.364948   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:59.365010   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:59.404075   71627 cri.go:89] found id: "48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:59.404096   71627 cri.go:89] found id: ""
	I0910 19:03:59.404103   71627 logs.go:276] 1 containers: [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27]
	I0910 19:03:59.404149   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.408098   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:59.408141   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:59.443767   71627 cri.go:89] found id: "55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:59.443792   71627 cri.go:89] found id: ""
	I0910 19:03:59.443802   71627 logs.go:276] 1 containers: [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20]
	I0910 19:03:59.443858   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.448348   71627 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:59.448397   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:59.485373   71627 cri.go:89] found id: ""
	I0910 19:03:59.485401   71627 logs.go:276] 0 containers: []
	W0910 19:03:59.485409   71627 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:59.485414   71627 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:03:59.485470   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:03:59.522641   71627 cri.go:89] found id: "b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:59.522660   71627 cri.go:89] found id: "173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:59.522664   71627 cri.go:89] found id: ""
	I0910 19:03:59.522671   71627 logs.go:276] 2 containers: [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9]
	I0910 19:03:59.522726   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.527283   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.531256   71627 logs.go:123] Gathering logs for etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] ...
	I0910 19:03:59.531275   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:59.576358   71627 logs.go:123] Gathering logs for coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] ...
	I0910 19:03:59.576382   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:59.625938   71627 logs.go:123] Gathering logs for kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] ...
	I0910 19:03:59.625974   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:59.664362   71627 logs.go:123] Gathering logs for kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] ...
	I0910 19:03:59.664386   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:59.718655   71627 logs.go:123] Gathering logs for storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] ...
	I0910 19:03:59.718686   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:59.763954   71627 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:59.763984   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:59.785217   71627 logs.go:123] Gathering logs for kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] ...
	I0910 19:03:59.785248   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:59.836560   71627 logs.go:123] Gathering logs for kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] ...
	I0910 19:03:59.836604   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:59.878973   71627 logs.go:123] Gathering logs for storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] ...
	I0910 19:03:59.879001   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:59.929851   71627 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:59.929878   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:00.400346   71627 logs.go:123] Gathering logs for container status ...
	I0910 19:04:00.400384   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:00.442281   71627 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:00.442307   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:00.510448   71627 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:00.510480   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:03:57.665980   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:59.666054   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:01.668052   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:59.558561   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:59.572993   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:59.573094   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:59.618957   72122 cri.go:89] found id: ""
	I0910 19:03:59.618988   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.618999   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:59.619008   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:59.619072   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:59.662544   72122 cri.go:89] found id: ""
	I0910 19:03:59.662643   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.662661   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:59.662673   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:59.662750   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:59.704323   72122 cri.go:89] found id: ""
	I0910 19:03:59.704349   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.704360   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:59.704367   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:59.704426   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:59.738275   72122 cri.go:89] found id: ""
	I0910 19:03:59.738301   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.738311   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:59.738317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:59.738367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:59.778887   72122 cri.go:89] found id: ""
	I0910 19:03:59.778922   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.778934   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:59.778944   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:59.779010   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:59.814953   72122 cri.go:89] found id: ""
	I0910 19:03:59.814985   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.814995   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:59.815003   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:59.815064   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:59.850016   72122 cri.go:89] found id: ""
	I0910 19:03:59.850048   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.850061   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:59.850069   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:59.850131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:59.887546   72122 cri.go:89] found id: ""
	I0910 19:03:59.887589   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.887600   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:59.887613   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:59.887632   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:59.938761   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:59.938784   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:59.954572   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:59.954603   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:04:00.029593   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:04:00.029622   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:00.029638   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:00.121427   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:04:00.121462   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
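The sweep above is minikube's diagnostic pass when the control plane is unreachable: for each expected component it runs "sudo crictl ps -a --quiet --name=<component>" to collect container IDs, then falls back to kubelet, dmesg, CRI-O and container-status logs. A minimal Go sketch of that container-ID lookup follows; the helper name and error handling are illustrative, not minikube's actual code, and it assumes crictl and sudo are available on the host.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs crictl to find containers whose name matches the given
// component and returns the (possibly empty) list of container IDs. This mirrors
// the "listing CRI containers" step in the log above; it is a sketch only.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}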
	I0910 19:04:02.660924   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:02.674661   72122 kubeadm.go:597] duration metric: took 4m3.166175956s to restartPrimaryControlPlane
	W0910 19:04:02.674744   72122 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0910 19:04:02.674769   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:04:03.133507   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:03.150426   72122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:04:03.161678   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:04:03.173362   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:04:03.173389   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 19:04:03.173436   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:04:03.183872   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:04:03.183934   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:04:03.193891   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:04:03.203385   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:04:03.203450   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:04:03.216255   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:04:03.227938   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:04:03.228001   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:04:03.240799   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:04:03.252871   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:04:03.252922   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
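The config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that is missing or does not reference it, so the following "kubeadm init" can regenerate all of them. A minimal sketch of that cleanup, assuming the same paths and endpoint string seen in the log (not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// Missing file or wrong endpoint: remove it so kubeadm init rewrites it.
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not reference %s - will remove\n", f, endpoint)
			_ = os.Remove(f)
		}
	}
}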
	I0910 19:04:03.263682   72122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:04:03.337478   72122 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0910 19:04:03.337564   72122 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:04:03.506276   72122 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:04:03.506454   72122 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:04:03.506587   72122 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 19:04:03.697062   72122 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:04:03.698908   72122 out.go:235]   - Generating certificates and keys ...
	I0910 19:04:03.699004   72122 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:04:03.699083   72122 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:04:03.699184   72122 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:04:03.699270   72122 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:04:03.699361   72122 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:04:03.699517   72122 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:04:03.700040   72122 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:04:03.700773   72122 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:04:03.701529   72122 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:04:03.702334   72122 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:04:03.702627   72122 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:04:03.702715   72122 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:04:03.929760   72122 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:04:03.992724   72122 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:04:04.087552   72122 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:04:04.226550   72122 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:04:04.244695   72122 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:04:04.246125   72122 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:04:04.246187   72122 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:04:04.396099   72122 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:04:03.107779   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 19:04:03.112394   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I0910 19:04:03.113347   71627 api_server.go:141] control plane version: v1.31.0
	I0910 19:04:03.113367   71627 api_server.go:131] duration metric: took 3.922898577s to wait for apiserver health ...
	I0910 19:04:03.113375   71627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:04:03.113399   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:03.113443   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:03.153182   71627 cri.go:89] found id: "1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:04:03.153204   71627 cri.go:89] found id: ""
	I0910 19:04:03.153213   71627 logs.go:276] 1 containers: [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a]
	I0910 19:04:03.153263   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.157842   71627 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:03.157906   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:03.199572   71627 cri.go:89] found id: "f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:04:03.199594   71627 cri.go:89] found id: ""
	I0910 19:04:03.199604   71627 logs.go:276] 1 containers: [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade]
	I0910 19:04:03.199658   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.204332   71627 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:03.204409   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:03.252660   71627 cri.go:89] found id: "24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:04:03.252686   71627 cri.go:89] found id: ""
	I0910 19:04:03.252696   71627 logs.go:276] 1 containers: [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100]
	I0910 19:04:03.252751   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.257850   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:03.257915   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:03.300208   71627 cri.go:89] found id: "1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:04:03.300226   71627 cri.go:89] found id: ""
	I0910 19:04:03.300235   71627 logs.go:276] 1 containers: [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68]
	I0910 19:04:03.300294   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.304875   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:03.304953   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:03.346705   71627 cri.go:89] found id: "48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:04:03.346734   71627 cri.go:89] found id: ""
	I0910 19:04:03.346744   71627 logs.go:276] 1 containers: [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27]
	I0910 19:04:03.346807   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.351246   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:03.351314   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:03.391218   71627 cri.go:89] found id: "55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:04:03.391240   71627 cri.go:89] found id: ""
	I0910 19:04:03.391247   71627 logs.go:276] 1 containers: [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20]
	I0910 19:04:03.391290   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.396156   71627 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:03.396264   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:03.437436   71627 cri.go:89] found id: ""
	I0910 19:04:03.437464   71627 logs.go:276] 0 containers: []
	W0910 19:04:03.437473   71627 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:03.437479   71627 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:03.437551   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:03.476396   71627 cri.go:89] found id: "b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:04:03.476417   71627 cri.go:89] found id: "173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:04:03.476420   71627 cri.go:89] found id: ""
	I0910 19:04:03.476427   71627 logs.go:276] 2 containers: [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9]
	I0910 19:04:03.476481   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.480969   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.485821   71627 logs.go:123] Gathering logs for container status ...
	I0910 19:04:03.485843   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:03.537042   71627 logs.go:123] Gathering logs for kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] ...
	I0910 19:04:03.537079   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:04:03.599059   71627 logs.go:123] Gathering logs for coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] ...
	I0910 19:04:03.599102   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:04:03.637541   71627 logs.go:123] Gathering logs for kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] ...
	I0910 19:04:03.637576   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:04:03.682203   71627 logs.go:123] Gathering logs for kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] ...
	I0910 19:04:03.682234   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:04:03.734965   71627 logs.go:123] Gathering logs for storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] ...
	I0910 19:04:03.734992   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:04:03.769711   71627 logs.go:123] Gathering logs for storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] ...
	I0910 19:04:03.769738   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:04:03.805970   71627 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:03.805999   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:04.165756   71627 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:04.165796   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:04.254572   71627 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:04.254609   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:04.272637   71627 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:04.272686   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:04.421716   71627 logs.go:123] Gathering logs for etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] ...
	I0910 19:04:04.421756   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:04:04.476657   71627 logs.go:123] Gathering logs for kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] ...
	I0910 19:04:04.476701   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:04:07.038592   71627 system_pods.go:59] 8 kube-system pods found
	I0910 19:04:07.038618   71627 system_pods.go:61] "coredns-6f6b679f8f-nq9fl" [87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f] Running
	I0910 19:04:07.038624   71627 system_pods.go:61] "etcd-default-k8s-diff-port-557504" [484ce7e2-e5b6-4b96-928e-c4519e072425] Running
	I0910 19:04:07.038628   71627 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-557504" [55a71e48-2cca-484c-8186-176494f8158c] Running
	I0910 19:04:07.038632   71627 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-557504" [98e7866a-c4a1-4b90-a758-506502405feb] Running
	I0910 19:04:07.038636   71627 system_pods.go:61] "kube-proxy-4t8r9" [aca739fc-0169-433b-85f1-17bf3ab538cb] Running
	I0910 19:04:07.038639   71627 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-557504" [eaa24622-2f9c-47bd-b002-173d5fb06126] Running
	I0910 19:04:07.038644   71627 system_pods.go:61] "metrics-server-6867b74b74-4sfwg" [6b5d0161-6a62-4752-b714-ada6b3772956] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:07.038651   71627 system_pods.go:61] "storage-provisioner" [7536d42b-90f4-44de-a7ba-652f8e535304] Running
	I0910 19:04:07.038658   71627 system_pods.go:74] duration metric: took 3.925277367s to wait for pod list to return data ...
	I0910 19:04:07.038667   71627 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:04:07.040831   71627 default_sa.go:45] found service account: "default"
	I0910 19:04:07.040854   71627 default_sa.go:55] duration metric: took 2.180848ms for default service account to be created ...
	I0910 19:04:07.040864   71627 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:04:07.045130   71627 system_pods.go:86] 8 kube-system pods found
	I0910 19:04:07.045151   71627 system_pods.go:89] "coredns-6f6b679f8f-nq9fl" [87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f] Running
	I0910 19:04:07.045157   71627 system_pods.go:89] "etcd-default-k8s-diff-port-557504" [484ce7e2-e5b6-4b96-928e-c4519e072425] Running
	I0910 19:04:07.045162   71627 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-557504" [55a71e48-2cca-484c-8186-176494f8158c] Running
	I0910 19:04:07.045167   71627 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-557504" [98e7866a-c4a1-4b90-a758-506502405feb] Running
	I0910 19:04:07.045171   71627 system_pods.go:89] "kube-proxy-4t8r9" [aca739fc-0169-433b-85f1-17bf3ab538cb] Running
	I0910 19:04:07.045175   71627 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-557504" [eaa24622-2f9c-47bd-b002-173d5fb06126] Running
	I0910 19:04:07.045180   71627 system_pods.go:89] "metrics-server-6867b74b74-4sfwg" [6b5d0161-6a62-4752-b714-ada6b3772956] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:07.045184   71627 system_pods.go:89] "storage-provisioner" [7536d42b-90f4-44de-a7ba-652f8e535304] Running
	I0910 19:04:07.045191   71627 system_pods.go:126] duration metric: took 4.321406ms to wait for k8s-apps to be running ...
	I0910 19:04:07.045200   71627 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:04:07.045242   71627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:07.061292   71627 system_svc.go:56] duration metric: took 16.084643ms WaitForService to wait for kubelet
	I0910 19:04:07.061318   71627 kubeadm.go:582] duration metric: took 4m21.980981405s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:04:07.061342   71627 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:04:07.064260   71627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:04:07.064277   71627 node_conditions.go:123] node cpu capacity is 2
	I0910 19:04:07.064288   71627 node_conditions.go:105] duration metric: took 2.940712ms to run NodePressure ...
	I0910 19:04:07.064298   71627 start.go:241] waiting for startup goroutines ...
	I0910 19:04:07.064308   71627 start.go:246] waiting for cluster config update ...
	I0910 19:04:07.064318   71627 start.go:255] writing updated cluster config ...
	I0910 19:04:07.064627   71627 ssh_runner.go:195] Run: rm -f paused
	I0910 19:04:07.109814   71627 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:04:07.111804   71627 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-557504" cluster and "default" namespace by default
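Before declaring this cluster ready, the run above polls the apiserver's /healthz endpoint (here https://192.168.72.54:8444/healthz) until it returns 200 "ok", then waits for the kube-system pods. A minimal sketch of such a health poll; for brevity it skips TLS verification, whereas a real client would trust the cluster CA, and the helper name is an assumption:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// InsecureSkipVerify keeps the sketch short; a production client verifies the cluster CA.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.54:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ok")
}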
	I0910 19:04:04.165083   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:06.663618   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:04.397627   72122 out.go:235]   - Booting up control plane ...
	I0910 19:04:04.397763   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:04:04.405199   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:04:04.407281   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:04:04.408182   72122 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:04:04.411438   72122 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 19:04:08.667046   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:11.164622   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:15.461731   71529 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.433698154s)
	I0910 19:04:15.461801   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:15.483515   71529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:04:15.497133   71529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:04:15.513903   71529 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:04:15.513924   71529 kubeadm.go:157] found existing configuration files:
	
	I0910 19:04:15.513972   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:04:15.524468   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:04:15.524529   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:04:15.534726   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:04:15.544892   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:04:15.544944   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:04:15.554663   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:04:15.564884   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:04:15.564978   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:04:15.574280   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:04:15.583882   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:04:15.583932   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:04:15.593971   71529 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:04:15.639220   71529 kubeadm.go:310] W0910 19:04:15.612221    3037 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:04:15.641412   71529 kubeadm.go:310] W0910 19:04:15.614470    3037 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:04:15.749471   71529 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:04:13.164865   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:15.165232   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:17.664384   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:19.664943   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:22.166309   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:24.300945   71529 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 19:04:24.301016   71529 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:04:24.301143   71529 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:04:24.301274   71529 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:04:24.301408   71529 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 19:04:24.301517   71529 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:04:24.302988   71529 out.go:235]   - Generating certificates and keys ...
	I0910 19:04:24.303079   71529 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:04:24.303132   71529 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:04:24.303197   71529 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:04:24.303252   71529 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:04:24.303315   71529 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:04:24.303367   71529 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:04:24.303443   71529 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:04:24.303517   71529 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:04:24.303631   71529 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:04:24.303737   71529 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:04:24.303792   71529 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:04:24.303873   71529 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:04:24.303954   71529 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:04:24.304037   71529 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 19:04:24.304120   71529 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:04:24.304217   71529 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:04:24.304299   71529 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:04:24.304423   71529 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:04:24.304523   71529 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:04:24.305839   71529 out.go:235]   - Booting up control plane ...
	I0910 19:04:24.305946   71529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:04:24.306046   71529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:04:24.306123   71529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:04:24.306254   71529 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:04:24.306338   71529 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:04:24.306387   71529 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:04:24.306507   71529 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 19:04:24.306608   71529 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 19:04:24.306679   71529 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.526264ms
	I0910 19:04:24.306748   71529 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 19:04:24.306801   71529 kubeadm.go:310] [api-check] The API server is healthy after 5.501960865s
	I0910 19:04:24.306887   71529 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 19:04:24.306997   71529 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 19:04:24.307045   71529 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 19:04:24.307202   71529 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-347802 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 19:04:24.307250   71529 kubeadm.go:310] [bootstrap-token] Using token: 3uw8fx.h3bliquui6tuj5mh
	I0910 19:04:24.308589   71529 out.go:235]   - Configuring RBAC rules ...
	I0910 19:04:24.308728   71529 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 19:04:24.308847   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 19:04:24.309021   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 19:04:24.309197   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 19:04:24.309330   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 19:04:24.309437   71529 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 19:04:24.309612   71529 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 19:04:24.309681   71529 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 19:04:24.309776   71529 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 19:04:24.309787   71529 kubeadm.go:310] 
	I0910 19:04:24.309865   71529 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 19:04:24.309874   71529 kubeadm.go:310] 
	I0910 19:04:24.309951   71529 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 19:04:24.309963   71529 kubeadm.go:310] 
	I0910 19:04:24.309984   71529 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 19:04:24.310033   71529 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 19:04:24.310085   71529 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 19:04:24.310091   71529 kubeadm.go:310] 
	I0910 19:04:24.310152   71529 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 19:04:24.310164   71529 kubeadm.go:310] 
	I0910 19:04:24.310203   71529 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 19:04:24.310214   71529 kubeadm.go:310] 
	I0910 19:04:24.310262   71529 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 19:04:24.310326   71529 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 19:04:24.310383   71529 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 19:04:24.310390   71529 kubeadm.go:310] 
	I0910 19:04:24.310457   71529 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 19:04:24.310525   71529 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 19:04:24.310531   71529 kubeadm.go:310] 
	I0910 19:04:24.310598   71529 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3uw8fx.h3bliquui6tuj5mh \
	I0910 19:04:24.310705   71529 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 \
	I0910 19:04:24.310728   71529 kubeadm.go:310] 	--control-plane 
	I0910 19:04:24.310731   71529 kubeadm.go:310] 
	I0910 19:04:24.310806   71529 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 19:04:24.310814   71529 kubeadm.go:310] 
	I0910 19:04:24.310884   71529 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3uw8fx.h3bliquui6tuj5mh \
	I0910 19:04:24.310978   71529 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 
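The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA during bootstrap. A short sketch of how that hash can be recomputed, assuming the CA lives at /var/lib/minikube/certs/ca.crt (the certificateDir reported by kubeadm earlier in this run):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path assumes the certificateDir shown above; adjust for other clusters.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The discovery hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo of the CA cert.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}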
	I0910 19:04:24.310994   71529 cni.go:84] Creating CNI manager for ""
	I0910 19:04:24.311006   71529 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:04:24.312411   71529 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 19:04:24.313516   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 19:04:24.326066   71529 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
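The bridge CNI step above copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist; the log does not show its contents. As a rough illustration only, a bridge-plus-portmap conflist typically looks like the output of the sketch below, with every field value an assumption rather than the exact bytes minikube writes:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative bridge CNI conflist; values are assumptions, not minikube's file.
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}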
	I0910 19:04:24.346367   71529 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 19:04:24.346446   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:24.346475   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-347802 minikube.k8s.io/updated_at=2024_09_10T19_04_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=no-preload-347802 minikube.k8s.io/primary=true
	I0910 19:04:24.374396   71529 ops.go:34] apiserver oom_adj: -16
	I0910 19:04:24.561164   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:25.061938   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:25.561435   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:26.062175   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:26.561899   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:27.061256   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:24.664345   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:26.666316   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:27.561862   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:28.061889   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:28.562200   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:28.732352   71529 kubeadm.go:1113] duration metric: took 4.385961888s to wait for elevateKubeSystemPrivileges
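The repeated "kubectl get sa default" runs above are a simple poll: the command is retried roughly every 500ms until the "default" service account exists, which signals the control plane is serving requests and the kube-system privilege escalation can complete. A minimal sketch of that retry loop, shelling out to the same kubectl path seen in the log (illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.0/kubectl"
	kubeconfig := "/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds only once the apiserver has created the "default" service account.
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}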
	I0910 19:04:28.732387   71529 kubeadm.go:394] duration metric: took 5m2.035769941s to StartCluster
	I0910 19:04:28.732410   71529 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:04:28.732497   71529 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 19:04:28.735625   71529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:04:28.735909   71529 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 19:04:28.736234   71529 config.go:182] Loaded profile config "no-preload-347802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:04:28.736296   71529 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 19:04:28.736417   71529 addons.go:69] Setting storage-provisioner=true in profile "no-preload-347802"
	I0910 19:04:28.736445   71529 addons.go:234] Setting addon storage-provisioner=true in "no-preload-347802"
	W0910 19:04:28.736453   71529 addons.go:243] addon storage-provisioner should already be in state true
	I0910 19:04:28.736480   71529 host.go:66] Checking if "no-preload-347802" exists ...
	I0910 19:04:28.736667   71529 addons.go:69] Setting default-storageclass=true in profile "no-preload-347802"
	I0910 19:04:28.736674   71529 addons.go:69] Setting metrics-server=true in profile "no-preload-347802"
	I0910 19:04:28.736703   71529 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-347802"
	I0910 19:04:28.736717   71529 addons.go:234] Setting addon metrics-server=true in "no-preload-347802"
	W0910 19:04:28.736727   71529 addons.go:243] addon metrics-server should already be in state true
	I0910 19:04:28.736758   71529 host.go:66] Checking if "no-preload-347802" exists ...
	I0910 19:04:28.737346   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.737360   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.737401   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.737709   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.737809   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.737832   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.737891   71529 out.go:177] * Verifying Kubernetes components...
	I0910 19:04:28.739122   71529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:04:28.755720   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41059
	I0910 19:04:28.755754   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0910 19:04:28.756110   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46397
	I0910 19:04:28.756297   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.756298   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.756688   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.756870   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.756891   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.757053   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.757092   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.757402   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.757426   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.757451   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.757637   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.757759   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.758328   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.758368   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.760809   71529 addons.go:234] Setting addon default-storageclass=true in "no-preload-347802"
	W0910 19:04:28.760825   71529 addons.go:243] addon default-storageclass should already be in state true
	I0910 19:04:28.760848   71529 host.go:66] Checking if "no-preload-347802" exists ...
	I0910 19:04:28.761254   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.761285   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.761486   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.761994   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.762024   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.775766   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41103
	I0910 19:04:28.776199   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.776801   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.776824   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.777167   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.777359   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.777651   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0910 19:04:28.778091   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.778678   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.778696   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.779019   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.779215   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.779616   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 19:04:28.780231   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0910 19:04:28.780605   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.780675   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 19:04:28.781156   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.781183   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.781330   71529 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:04:28.781416   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.781810   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.781841   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.782326   71529 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 19:04:28.782391   71529 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:04:28.782408   71529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 19:04:28.782425   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 19:04:28.783393   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 19:04:28.783413   71529 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 19:04:28.783433   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 19:04:28.785287   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.785763   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 19:04:28.785792   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.785948   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 19:04:28.786114   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 19:04:28.786250   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 19:04:28.786397   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 19:04:28.786768   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.787101   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 19:04:28.787124   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.787330   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 19:04:28.787492   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 19:04:28.787637   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 19:04:28.787747   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 19:04:28.802599   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0910 19:04:28.802947   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.803402   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.803415   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.803711   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.803882   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.805296   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 19:04:28.805498   71529 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 19:04:28.805510   71529 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 19:04:28.805523   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 19:04:28.808615   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.809041   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 19:04:28.809056   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.809333   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 19:04:28.809518   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 19:04:28.809687   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 19:04:28.809792   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 19:04:28.974399   71529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:04:29.068531   71529 node_ready.go:35] waiting up to 6m0s for node "no-preload-347802" to be "Ready" ...
	I0910 19:04:29.084281   71529 node_ready.go:49] node "no-preload-347802" has status "Ready":"True"
	I0910 19:04:29.084306   71529 node_ready.go:38] duration metric: took 15.737646ms for node "no-preload-347802" to be "Ready" ...
	I0910 19:04:29.084317   71529 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:04:29.098794   71529 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:29.122272   71529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:04:29.132813   71529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 19:04:29.191758   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 19:04:29.191777   71529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 19:04:29.224998   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 19:04:29.225019   71529 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 19:04:29.264455   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:04:29.264489   71529 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 19:04:29.369504   71529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:04:30.199702   71529 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.066859027s)
	I0910 19:04:30.199757   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.199769   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.199850   71529 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.077541595s)
	I0910 19:04:30.199895   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.199909   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.200096   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.200135   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200147   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.200155   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.200154   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.200174   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.200187   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200201   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.200209   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.200220   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.200387   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200402   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.200617   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.200655   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200680   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.219416   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.219437   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.219697   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.219705   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.219741   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.366927   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.366957   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.367264   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.367279   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.367288   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.367302   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.367506   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.367520   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.367533   71529 addons.go:475] Verifying addon metrics-server=true in "no-preload-347802"
	I0910 19:04:30.369968   71529 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0910 19:04:30.371186   71529 addons.go:510] duration metric: took 1.634894777s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0910 19:04:31.104506   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:29.164993   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:31.668683   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:33.105761   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:35.606200   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:34.164783   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:36.663840   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:38.106188   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:39.106175   71529 pod_ready.go:93] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.106199   71529 pod_ready.go:82] duration metric: took 10.007378894s for pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.106210   71529 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hlbrz" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.111333   71529 pod_ready.go:93] pod "coredns-6f6b679f8f-hlbrz" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.111352   71529 pod_ready.go:82] duration metric: took 5.13344ms for pod "coredns-6f6b679f8f-hlbrz" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.111362   71529 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.116673   71529 pod_ready.go:93] pod "etcd-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.116689   71529 pod_ready.go:82] duration metric: took 5.319986ms for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.116697   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.125400   71529 pod_ready.go:93] pod "kube-apiserver-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.125422   71529 pod_ready.go:82] duration metric: took 8.717835ms for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.125433   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.133790   71529 pod_ready.go:93] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.133807   71529 pod_ready.go:82] duration metric: took 8.36626ms for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.133818   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gwzhs" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.504642   71529 pod_ready.go:93] pod "kube-proxy-gwzhs" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.504665   71529 pod_ready.go:82] duration metric: took 370.840119ms for pod "kube-proxy-gwzhs" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.504675   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.903625   71529 pod_ready.go:93] pod "kube-scheduler-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.903646   71529 pod_ready.go:82] duration metric: took 398.964651ms for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.903653   71529 pod_ready.go:39] duration metric: took 10.819325885s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:04:39.903666   71529 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:04:39.903710   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:39.918479   71529 api_server.go:72] duration metric: took 11.182520681s to wait for apiserver process to appear ...
	I0910 19:04:39.918501   71529 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:04:39.918521   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 19:04:39.922745   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0910 19:04:39.923681   71529 api_server.go:141] control plane version: v1.31.0
	I0910 19:04:39.923701   71529 api_server.go:131] duration metric: took 5.193102ms to wait for apiserver health ...
	I0910 19:04:39.923710   71529 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:04:40.106587   71529 system_pods.go:59] 9 kube-system pods found
	I0910 19:04:40.106614   71529 system_pods.go:61] "coredns-6f6b679f8f-bsp9f" [53cd67b5-b542-4b40-adf9-3aba78407735] Running
	I0910 19:04:40.106619   71529 system_pods.go:61] "coredns-6f6b679f8f-hlbrz" [1e66ea46-d3ad-44e4-b9fc-c7ea5c44ac15] Running
	I0910 19:04:40.106623   71529 system_pods.go:61] "etcd-no-preload-347802" [8fcf8fce-881a-44d8-8763-2a02c848f39e] Running
	I0910 19:04:40.106626   71529 system_pods.go:61] "kube-apiserver-no-preload-347802" [372d6d5b-bf44-4edf-bd25-7b75f5966d9e] Running
	I0910 19:04:40.106630   71529 system_pods.go:61] "kube-controller-manager-no-preload-347802" [6ab95db4-18a0-48b6-ae4c-b08ccdeaec01] Running
	I0910 19:04:40.106633   71529 system_pods.go:61] "kube-proxy-gwzhs" [f03fe8e3-bee7-4805-a1e9-83494f33105c] Running
	I0910 19:04:40.106637   71529 system_pods.go:61] "kube-scheduler-no-preload-347802" [5a130f4f-7577-4e25-ac66-b9181733e667] Running
	I0910 19:04:40.106642   71529 system_pods.go:61] "metrics-server-6867b74b74-cz4tz" [22d16ca9-922b-40d8-97d1-47a44ba70aa3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:40.106646   71529 system_pods.go:61] "storage-provisioner" [cd77229d-0209-459f-ac5e-96317c425f60] Running
	I0910 19:04:40.106652   71529 system_pods.go:74] duration metric: took 182.93737ms to wait for pod list to return data ...
	I0910 19:04:40.106662   71529 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:04:40.303294   71529 default_sa.go:45] found service account: "default"
	I0910 19:04:40.303316   71529 default_sa.go:55] duration metric: took 196.649242ms for default service account to be created ...
	I0910 19:04:40.303324   71529 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:04:40.506862   71529 system_pods.go:86] 9 kube-system pods found
	I0910 19:04:40.506894   71529 system_pods.go:89] "coredns-6f6b679f8f-bsp9f" [53cd67b5-b542-4b40-adf9-3aba78407735] Running
	I0910 19:04:40.506902   71529 system_pods.go:89] "coredns-6f6b679f8f-hlbrz" [1e66ea46-d3ad-44e4-b9fc-c7ea5c44ac15] Running
	I0910 19:04:40.506908   71529 system_pods.go:89] "etcd-no-preload-347802" [8fcf8fce-881a-44d8-8763-2a02c848f39e] Running
	I0910 19:04:40.506913   71529 system_pods.go:89] "kube-apiserver-no-preload-347802" [372d6d5b-bf44-4edf-bd25-7b75f5966d9e] Running
	I0910 19:04:40.506919   71529 system_pods.go:89] "kube-controller-manager-no-preload-347802" [6ab95db4-18a0-48b6-ae4c-b08ccdeaec01] Running
	I0910 19:04:40.506925   71529 system_pods.go:89] "kube-proxy-gwzhs" [f03fe8e3-bee7-4805-a1e9-83494f33105c] Running
	I0910 19:04:40.506931   71529 system_pods.go:89] "kube-scheduler-no-preload-347802" [5a130f4f-7577-4e25-ac66-b9181733e667] Running
	I0910 19:04:40.506940   71529 system_pods.go:89] "metrics-server-6867b74b74-cz4tz" [22d16ca9-922b-40d8-97d1-47a44ba70aa3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:40.506949   71529 system_pods.go:89] "storage-provisioner" [cd77229d-0209-459f-ac5e-96317c425f60] Running
	I0910 19:04:40.506963   71529 system_pods.go:126] duration metric: took 203.633111ms to wait for k8s-apps to be running ...
	I0910 19:04:40.506974   71529 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:04:40.507032   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:40.522711   71529 system_svc.go:56] duration metric: took 15.728044ms WaitForService to wait for kubelet
	I0910 19:04:40.522739   71529 kubeadm.go:582] duration metric: took 11.786784927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:04:40.522761   71529 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:04:40.702993   71529 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:04:40.703011   71529 node_conditions.go:123] node cpu capacity is 2
	I0910 19:04:40.703020   71529 node_conditions.go:105] duration metric: took 180.253729ms to run NodePressure ...
	I0910 19:04:40.703031   71529 start.go:241] waiting for startup goroutines ...
	I0910 19:04:40.703037   71529 start.go:246] waiting for cluster config update ...
	I0910 19:04:40.703046   71529 start.go:255] writing updated cluster config ...
	I0910 19:04:40.703329   71529 ssh_runner.go:195] Run: rm -f paused
	I0910 19:04:40.750434   71529 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:04:40.752453   71529 out.go:177] * Done! kubectl is now configured to use "no-preload-347802" cluster and "default" namespace by default
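The lines above record the readiness gates minikube walked through for "no-preload-347802" before printing Done!: node Ready, system-critical pods Ready, the kube-apiserver process, the /healthz endpoint, the kube-system pod list, the default service account, and the kubelet service. A minimal sketch of reproducing those same checks by hand, assuming the kubeconfig context "no-preload-347802" and SSH access to the node; the endpoint and commands are taken from the log above, not from the test harness:

    # sketch only; mirrors the ssh_runner/api_server steps logged above
    kubectl --context no-preload-347802 get nodes                     # node "Ready"?
    kubectl --context no-preload-347802 -n kube-system get pods       # system-critical pods "Ready"?
    curl -k https://192.168.50.138:8443/healthz                       # apiserver healthz (IP:port from the log)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'                      # apiserver process, run on the node
    sudo systemctl is-active --quiet service kubelet && echo kubelet-running   # kubelet service check as invoked in the log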
	I0910 19:04:37.670616   71183 pod_ready.go:82] duration metric: took 4m0.012645309s for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	E0910 19:04:37.670637   71183 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0910 19:04:37.670644   71183 pod_ready.go:39] duration metric: took 4m3.614436373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:04:37.670658   71183 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:04:37.670693   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:37.670746   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:37.721269   71183 cri.go:89] found id: "b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:37.721295   71183 cri.go:89] found id: ""
	I0910 19:04:37.721303   71183 logs.go:276] 1 containers: [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293]
	I0910 19:04:37.721361   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.725648   71183 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:37.725711   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:37.760937   71183 cri.go:89] found id: "4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:37.760967   71183 cri.go:89] found id: ""
	I0910 19:04:37.760978   71183 logs.go:276] 1 containers: [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34]
	I0910 19:04:37.761034   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.765181   71183 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:37.765243   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:37.800419   71183 cri.go:89] found id: "6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:37.800447   71183 cri.go:89] found id: ""
	I0910 19:04:37.800457   71183 logs.go:276] 1 containers: [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313]
	I0910 19:04:37.800509   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.805255   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:37.805330   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:37.849032   71183 cri.go:89] found id: "6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:37.849055   71183 cri.go:89] found id: ""
	I0910 19:04:37.849064   71183 logs.go:276] 1 containers: [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc]
	I0910 19:04:37.849136   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.853148   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:37.853224   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:37.888327   71183 cri.go:89] found id: "f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:37.888352   71183 cri.go:89] found id: ""
	I0910 19:04:37.888361   71183 logs.go:276] 1 containers: [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e]
	I0910 19:04:37.888417   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.892721   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:37.892782   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:37.928648   71183 cri.go:89] found id: "2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:37.928671   71183 cri.go:89] found id: ""
	I0910 19:04:37.928679   71183 logs.go:276] 1 containers: [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3]
	I0910 19:04:37.928731   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.932746   71183 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:37.932804   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:37.967343   71183 cri.go:89] found id: ""
	I0910 19:04:37.967372   71183 logs.go:276] 0 containers: []
	W0910 19:04:37.967382   71183 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:37.967387   71183 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:37.967435   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:38.004150   71183 cri.go:89] found id: "11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:38.004173   71183 cri.go:89] found id: "2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:38.004176   71183 cri.go:89] found id: ""
	I0910 19:04:38.004183   71183 logs.go:276] 2 containers: [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f]
	I0910 19:04:38.004227   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:38.008118   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:38.011779   71183 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:38.011799   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:38.026386   71183 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:38.026405   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:38.149296   71183 logs.go:123] Gathering logs for kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] ...
	I0910 19:04:38.149324   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:38.200987   71183 logs.go:123] Gathering logs for etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] ...
	I0910 19:04:38.201019   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:38.243953   71183 logs.go:123] Gathering logs for coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] ...
	I0910 19:04:38.243983   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:38.287242   71183 logs.go:123] Gathering logs for kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] ...
	I0910 19:04:38.287272   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:38.329165   71183 logs.go:123] Gathering logs for kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] ...
	I0910 19:04:38.329193   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:38.391117   71183 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:38.391144   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:38.464906   71183 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:38.464944   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:38.979681   71183 logs.go:123] Gathering logs for storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] ...
	I0910 19:04:38.979732   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:39.015604   71183 logs.go:123] Gathering logs for storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] ...
	I0910 19:04:39.015636   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:39.055715   71183 logs.go:123] Gathering logs for container status ...
	I0910 19:04:39.055748   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:39.103920   71183 logs.go:123] Gathering logs for kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] ...
	I0910 19:04:39.103952   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:41.650354   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:41.667568   71183 api_server.go:72] duration metric: took 4m15.330735169s to wait for apiserver process to appear ...
	I0910 19:04:41.667604   71183 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:04:41.667636   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:41.667682   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:41.707476   71183 cri.go:89] found id: "b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:41.707507   71183 cri.go:89] found id: ""
	I0910 19:04:41.707520   71183 logs.go:276] 1 containers: [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293]
	I0910 19:04:41.707590   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.711732   71183 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:41.711794   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:41.745943   71183 cri.go:89] found id: "4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:41.745963   71183 cri.go:89] found id: ""
	I0910 19:04:41.745972   71183 logs.go:276] 1 containers: [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34]
	I0910 19:04:41.746023   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.749930   71183 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:41.749978   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:41.790296   71183 cri.go:89] found id: "6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:41.790318   71183 cri.go:89] found id: ""
	I0910 19:04:41.790327   71183 logs.go:276] 1 containers: [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313]
	I0910 19:04:41.790388   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.794933   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:41.794988   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:41.840669   71183 cri.go:89] found id: "6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:41.840695   71183 cri.go:89] found id: ""
	I0910 19:04:41.840704   71183 logs.go:276] 1 containers: [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc]
	I0910 19:04:41.840762   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.845674   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:41.845729   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:41.891686   71183 cri.go:89] found id: "f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:41.891708   71183 cri.go:89] found id: ""
	I0910 19:04:41.891717   71183 logs.go:276] 1 containers: [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e]
	I0910 19:04:41.891774   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.896435   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:41.896486   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:41.935802   71183 cri.go:89] found id: "2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:41.935829   71183 cri.go:89] found id: ""
	I0910 19:04:41.935838   71183 logs.go:276] 1 containers: [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3]
	I0910 19:04:41.935882   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.940924   71183 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:41.940979   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:41.980326   71183 cri.go:89] found id: ""
	I0910 19:04:41.980349   71183 logs.go:276] 0 containers: []
	W0910 19:04:41.980357   71183 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:41.980362   71183 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:41.980409   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:42.021683   71183 cri.go:89] found id: "11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:42.021701   71183 cri.go:89] found id: "2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:42.021704   71183 cri.go:89] found id: ""
	I0910 19:04:42.021711   71183 logs.go:276] 2 containers: [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f]
	I0910 19:04:42.021760   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:42.025986   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:42.029896   71183 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:42.029919   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:42.101147   71183 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:42.101182   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:42.115299   71183 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:42.115324   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:42.230472   71183 logs.go:123] Gathering logs for kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] ...
	I0910 19:04:42.230503   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:42.285314   71183 logs.go:123] Gathering logs for etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] ...
	I0910 19:04:42.285341   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:42.338243   71183 logs.go:123] Gathering logs for coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] ...
	I0910 19:04:42.338283   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:42.380609   71183 logs.go:123] Gathering logs for kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] ...
	I0910 19:04:42.380636   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:42.424255   71183 logs.go:123] Gathering logs for kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] ...
	I0910 19:04:42.424290   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:42.481943   71183 logs.go:123] Gathering logs for kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] ...
	I0910 19:04:42.481972   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:42.525590   71183 logs.go:123] Gathering logs for storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] ...
	I0910 19:04:42.525613   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:42.566519   71183 logs.go:123] Gathering logs for storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] ...
	I0910 19:04:42.566546   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:42.601221   71183 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:42.601256   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:43.021780   71183 logs.go:123] Gathering logs for container status ...
	I0910 19:04:43.021816   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:45.569149   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:04:45.575146   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I0910 19:04:45.576058   71183 api_server.go:141] control plane version: v1.31.0
	I0910 19:04:45.576077   71183 api_server.go:131] duration metric: took 3.908465286s to wait for apiserver health ...
	I0910 19:04:45.576088   71183 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:04:45.576112   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:45.576159   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:45.631224   71183 cri.go:89] found id: "b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:45.631246   71183 cri.go:89] found id: ""
	I0910 19:04:45.631254   71183 logs.go:276] 1 containers: [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293]
	I0910 19:04:45.631310   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.636343   71183 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:45.636408   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:45.675538   71183 cri.go:89] found id: "4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:45.675558   71183 cri.go:89] found id: ""
	I0910 19:04:45.675565   71183 logs.go:276] 1 containers: [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34]
	I0910 19:04:45.675620   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.679865   71183 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:45.679921   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:45.724808   71183 cri.go:89] found id: "6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:45.724835   71183 cri.go:89] found id: ""
	I0910 19:04:45.724844   71183 logs.go:276] 1 containers: [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313]
	I0910 19:04:45.724898   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.729083   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:45.729141   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:45.762943   71183 cri.go:89] found id: "6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:45.762965   71183 cri.go:89] found id: ""
	I0910 19:04:45.762973   71183 logs.go:276] 1 containers: [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc]
	I0910 19:04:45.763022   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.766889   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:45.766935   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:45.802849   71183 cri.go:89] found id: "f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:45.802875   71183 cri.go:89] found id: ""
	I0910 19:04:45.802883   71183 logs.go:276] 1 containers: [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e]
	I0910 19:04:45.802924   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.806796   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:45.806860   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:45.841656   71183 cri.go:89] found id: "2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:45.841675   71183 cri.go:89] found id: ""
	I0910 19:04:45.841682   71183 logs.go:276] 1 containers: [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3]
	I0910 19:04:45.841722   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.846078   71183 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:45.846145   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:45.883750   71183 cri.go:89] found id: ""
	I0910 19:04:45.883773   71183 logs.go:276] 0 containers: []
	W0910 19:04:45.883787   71183 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:45.883795   71183 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:45.883857   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:45.918786   71183 cri.go:89] found id: "11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:45.918815   71183 cri.go:89] found id: "2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:45.918822   71183 cri.go:89] found id: ""
	I0910 19:04:45.918829   71183 logs.go:276] 2 containers: [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f]
	I0910 19:04:45.918876   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.923329   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.927395   71183 logs.go:123] Gathering logs for storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] ...
	I0910 19:04:45.927417   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:45.963527   71183 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:45.963557   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:46.364843   71183 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:46.364886   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:46.379339   71183 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:46.379366   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:46.483159   71183 logs.go:123] Gathering logs for kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] ...
	I0910 19:04:46.483190   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:46.523850   71183 logs.go:123] Gathering logs for kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] ...
	I0910 19:04:46.523877   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:46.574864   71183 logs.go:123] Gathering logs for kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] ...
	I0910 19:04:46.574905   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:46.613765   71183 logs.go:123] Gathering logs for storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] ...
	I0910 19:04:46.613793   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:46.659791   71183 logs.go:123] Gathering logs for container status ...
	I0910 19:04:46.659819   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:46.722103   71183 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:46.722138   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:46.794098   71183 logs.go:123] Gathering logs for kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] ...
	I0910 19:04:46.794140   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:46.850112   71183 logs.go:123] Gathering logs for etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] ...
	I0910 19:04:46.850148   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:46.899733   71183 logs.go:123] Gathering logs for coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] ...
	I0910 19:04:46.899770   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:44.413134   72122 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0910 19:04:44.413215   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:44.413400   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:04:49.448164   71183 system_pods.go:59] 8 kube-system pods found
	I0910 19:04:49.448194   71183 system_pods.go:61] "coredns-6f6b679f8f-mt78p" [b4bbfe99-3c36-4095-b7e8-ee0861f9973f] Running
	I0910 19:04:49.448201   71183 system_pods.go:61] "etcd-embed-certs-836868" [0515a8db-e1b9-41d9-a69e-e49fbd6af70f] Running
	I0910 19:04:49.448207   71183 system_pods.go:61] "kube-apiserver-embed-certs-836868" [f6771940-518b-4ae6-93a6-8ffd2e08a6ff] Running
	I0910 19:04:49.448216   71183 system_pods.go:61] "kube-controller-manager-embed-certs-836868" [07f6b11e-478b-4501-b615-8906877c7cbe] Running
	I0910 19:04:49.448220   71183 system_pods.go:61] "kube-proxy-4fddv" [13f0b1df-26eb-4a6c-957d-0b7655309cb9] Running
	I0910 19:04:49.448225   71183 system_pods.go:61] "kube-scheduler-embed-certs-836868" [a16bf2b1-3fe3-487b-92f7-f71f693b83dd] Running
	I0910 19:04:49.448232   71183 system_pods.go:61] "metrics-server-6867b74b74-26knw" [fdf89bfa-f2b6-4dc4-9279-ed75c1256494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:49.448239   71183 system_pods.go:61] "storage-provisioner" [47ed78c5-1cce-4d50-a023-5c356f331035] Running
	I0910 19:04:49.448248   71183 system_pods.go:74] duration metric: took 3.872154051s to wait for pod list to return data ...
	I0910 19:04:49.448255   71183 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:04:49.450795   71183 default_sa.go:45] found service account: "default"
	I0910 19:04:49.450816   71183 default_sa.go:55] duration metric: took 2.553358ms for default service account to be created ...
	I0910 19:04:49.450826   71183 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:04:49.454993   71183 system_pods.go:86] 8 kube-system pods found
	I0910 19:04:49.455015   71183 system_pods.go:89] "coredns-6f6b679f8f-mt78p" [b4bbfe99-3c36-4095-b7e8-ee0861f9973f] Running
	I0910 19:04:49.455020   71183 system_pods.go:89] "etcd-embed-certs-836868" [0515a8db-e1b9-41d9-a69e-e49fbd6af70f] Running
	I0910 19:04:49.455024   71183 system_pods.go:89] "kube-apiserver-embed-certs-836868" [f6771940-518b-4ae6-93a6-8ffd2e08a6ff] Running
	I0910 19:04:49.455030   71183 system_pods.go:89] "kube-controller-manager-embed-certs-836868" [07f6b11e-478b-4501-b615-8906877c7cbe] Running
	I0910 19:04:49.455033   71183 system_pods.go:89] "kube-proxy-4fddv" [13f0b1df-26eb-4a6c-957d-0b7655309cb9] Running
	I0910 19:04:49.455038   71183 system_pods.go:89] "kube-scheduler-embed-certs-836868" [a16bf2b1-3fe3-487b-92f7-f71f693b83dd] Running
	I0910 19:04:49.455047   71183 system_pods.go:89] "metrics-server-6867b74b74-26knw" [fdf89bfa-f2b6-4dc4-9279-ed75c1256494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:49.455053   71183 system_pods.go:89] "storage-provisioner" [47ed78c5-1cce-4d50-a023-5c356f331035] Running
	I0910 19:04:49.455062   71183 system_pods.go:126] duration metric: took 4.230457ms to wait for k8s-apps to be running ...
	I0910 19:04:49.455073   71183 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:04:49.455130   71183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:49.471265   71183 system_svc.go:56] duration metric: took 16.184718ms WaitForService to wait for kubelet
	I0910 19:04:49.471293   71183 kubeadm.go:582] duration metric: took 4m23.134472506s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:04:49.471320   71183 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:04:49.475529   71183 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:04:49.475548   71183 node_conditions.go:123] node cpu capacity is 2
	I0910 19:04:49.475558   71183 node_conditions.go:105] duration metric: took 4.228611ms to run NodePressure ...
	I0910 19:04:49.475567   71183 start.go:241] waiting for startup goroutines ...
	I0910 19:04:49.475577   71183 start.go:246] waiting for cluster config update ...
	I0910 19:04:49.475589   71183 start.go:255] writing updated cluster config ...
	I0910 19:04:49.475827   71183 ssh_runner.go:195] Run: rm -f paused
	I0910 19:04:49.522354   71183 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:04:49.524738   71183 out.go:177] * Done! kubectl is now configured to use "embed-certs-836868" cluster and "default" namespace by default
	I0910 19:04:49.413796   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:49.413967   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:04:59.414341   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:59.414514   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:19.415680   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:05:19.415950   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:59.417770   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:05:59.418015   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:59.418035   72122 kubeadm.go:310] 
	I0910 19:05:59.418101   72122 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0910 19:05:59.418137   72122 kubeadm.go:310] 		timed out waiting for the condition
	I0910 19:05:59.418143   72122 kubeadm.go:310] 
	I0910 19:05:59.418178   72122 kubeadm.go:310] 	This error is likely caused by:
	I0910 19:05:59.418207   72122 kubeadm.go:310] 		- The kubelet is not running
	I0910 19:05:59.418313   72122 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0910 19:05:59.418321   72122 kubeadm.go:310] 
	I0910 19:05:59.418443   72122 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0910 19:05:59.418477   72122 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0910 19:05:59.418519   72122 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0910 19:05:59.418527   72122 kubeadm.go:310] 
	I0910 19:05:59.418625   72122 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0910 19:05:59.418723   72122 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0910 19:05:59.418731   72122 kubeadm.go:310] 
	I0910 19:05:59.418869   72122 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0910 19:05:59.418976   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0910 19:05:59.419045   72122 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0910 19:05:59.419141   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0910 19:05:59.419152   72122 kubeadm.go:310] 
	I0910 19:05:59.420015   72122 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:05:59.420093   72122 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0910 19:05:59.420165   72122 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0910 19:05:59.420289   72122 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0910 19:05:59.420339   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:06:04.848652   72122 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.428289133s)
	I0910 19:06:04.848719   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:06:04.862914   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:06:04.872903   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:06:04.872920   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 19:06:04.872960   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:06:04.882109   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:06:04.882168   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:06:04.890962   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:06:04.899925   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:06:04.899985   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:06:04.908796   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:06:04.917123   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:06:04.917173   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:06:04.925821   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:06:04.937885   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:06:04.937963   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:06:04.948108   72122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:06:05.019246   72122 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0910 19:06:05.019321   72122 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:06:05.162639   72122 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:06:05.162770   72122 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:06:05.162918   72122 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 19:06:05.343270   72122 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:06:05.345092   72122 out.go:235]   - Generating certificates and keys ...
	I0910 19:06:05.345189   72122 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:06:05.345299   72122 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:06:05.345417   72122 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:06:05.345497   72122 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:06:05.345606   72122 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:06:05.345718   72122 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:06:05.345981   72122 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:06:05.346367   72122 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:06:05.346822   72122 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:06:05.347133   72122 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:06:05.347246   72122 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:06:05.347346   72122 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:06:05.536681   72122 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:06:05.773929   72122 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:06:05.994857   72122 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:06:06.139145   72122 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:06:06.154510   72122 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:06:06.155479   72122 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:06:06.155548   72122 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:06:06.311520   72122 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:06:06.314167   72122 out.go:235]   - Booting up control plane ...
	I0910 19:06:06.314311   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:06:06.320856   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:06:06.321801   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:06:06.322508   72122 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:06:06.324744   72122 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 19:06:46.327168   72122 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0910 19:06:46.327286   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:06:46.327534   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:06:51.328423   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:06:51.328643   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:07:01.329028   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:07:01.329315   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:07:21.329371   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:07:21.329627   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:08:01.328238   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:08:01.328535   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:08:01.328566   72122 kubeadm.go:310] 
	I0910 19:08:01.328625   72122 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0910 19:08:01.328688   72122 kubeadm.go:310] 		timed out waiting for the condition
	I0910 19:08:01.328701   72122 kubeadm.go:310] 
	I0910 19:08:01.328749   72122 kubeadm.go:310] 	This error is likely caused by:
	I0910 19:08:01.328797   72122 kubeadm.go:310] 		- The kubelet is not running
	I0910 19:08:01.328941   72122 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0910 19:08:01.328953   72122 kubeadm.go:310] 
	I0910 19:08:01.329068   72122 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0910 19:08:01.329136   72122 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0910 19:08:01.329177   72122 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0910 19:08:01.329191   72122 kubeadm.go:310] 
	I0910 19:08:01.329310   72122 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0910 19:08:01.329377   72122 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0910 19:08:01.329383   72122 kubeadm.go:310] 
	I0910 19:08:01.329468   72122 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0910 19:08:01.329539   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0910 19:08:01.329607   72122 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0910 19:08:01.329667   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0910 19:08:01.329674   72122 kubeadm.go:310] 
	I0910 19:08:01.330783   72122 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:08:01.330892   72122 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0910 19:08:01.330963   72122 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0910 19:08:01.331020   72122 kubeadm.go:394] duration metric: took 8m1.874926868s to StartCluster
	I0910 19:08:01.331061   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:08:01.331117   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:08:01.385468   72122 cri.go:89] found id: ""
	I0910 19:08:01.385492   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.385499   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:08:01.385505   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:08:01.385571   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:08:01.424028   72122 cri.go:89] found id: ""
	I0910 19:08:01.424051   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.424060   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:08:01.424064   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:08:01.424121   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:08:01.462946   72122 cri.go:89] found id: ""
	I0910 19:08:01.462973   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.462983   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:08:01.462991   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:08:01.463045   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:08:01.498242   72122 cri.go:89] found id: ""
	I0910 19:08:01.498269   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.498278   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:08:01.498283   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:08:01.498329   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:08:01.532917   72122 cri.go:89] found id: ""
	I0910 19:08:01.532946   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.532953   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:08:01.532959   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:08:01.533011   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:08:01.567935   72122 cri.go:89] found id: ""
	I0910 19:08:01.567959   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.567967   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:08:01.567973   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:08:01.568027   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:08:01.601393   72122 cri.go:89] found id: ""
	I0910 19:08:01.601418   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.601426   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:08:01.601432   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:08:01.601489   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:08:01.639307   72122 cri.go:89] found id: ""
	I0910 19:08:01.639335   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.639345   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:08:01.639358   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:08:01.639373   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:08:01.726566   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:08:01.726591   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:08:01.726614   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:08:01.839965   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:08:01.840004   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:08:01.879658   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:08:01.879687   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:08:01.939066   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:08:01.939102   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0910 19:08:01.955390   72122 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0910 19:08:01.955436   72122 out.go:270] * 
	W0910 19:08:01.955500   72122 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0910 19:08:01.955524   72122 out.go:270] * 
	W0910 19:08:01.956343   72122 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 19:08:01.959608   72122 out.go:201] 
	W0910 19:08:01.960877   72122 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0910 19:08:01.960929   72122 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0910 19:08:01.960957   72122 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0910 19:08:01.962345   72122 out.go:201] 
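	For reference, the troubleshooting steps named in the failure output above can be reproduced by hand. This is only a sketch of those same commands; the profile-name placeholder and the use of `minikube ssh` to reach the node are assumptions, not details recorded in this report:
	
		# on the node (e.g. via `minikube ssh -p <profile>`): check whether the kubelet is running and why it exited
		systemctl status kubelet
		journalctl -xeu kubelet
		# list Kubernetes containers under CRI-O and inspect the logs of a failing one
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
		# if a kubelet cgroup-driver mismatch is suspected, retry with the flag suggested above
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd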
	
	
	==> CRI-O <==
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.108213966Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995589108187343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c92932e-df0d-4428-86ea-c143a76445d1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.109145230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c0e84b7-459f-4fc0-8cd4-c94dad5f381d name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.109199005Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c0e84b7-459f-4fc0-8cd4-c94dad5f381d name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.109737325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391,PodSandboxId:d28c5f4f4a3780603b92a3af9801be921a308a87cb31a5b73d3f6b6c41de17fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725994813631660776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7536d42b-90f4-44de-a7ba-652f8e535304,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d45d79d8703a5fc2a62839ad8bb6d496ce08997cc5153453c5e9b7a59a1364,PodSandboxId:01ef94f4f5f14ac6fccd5857d26eb00e16c4ead3103026124601c7169eadb226,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1725994792626440686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0a8517a-170a-406e-89f5-7cc376bb0908,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100,PodSandboxId:508fb9d46dc56e54b23345f1a393f3152cddc61eb4a413035dee2892a6628d6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725994790408117005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nq9fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9,PodSandboxId:d28c5f4f4a3780603b92a3af9801be921a308a87cb31a5b73d3f6b6c41de17fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725994782727181976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7536d42b-90f4-44de-a7ba-652f8e535304,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27,PodSandboxId:a1be56467d27e7d8e241b79081cf999e6bf06801b77512fefae22b50774058c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725994782711889420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t8r9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca739fc-0169-433b-85f1
-17bf3ab538cb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade,PodSandboxId:07abf1a8ad095596d0304c9d02d6e49d826aa0cf9dbc2685801b579782a3f18d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725994779085118784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a8f55c3c023cbb2065ea0b24444a9d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a,PodSandboxId:4691d75c717c5e7b65e5cbf439358cf50e21cab9b3177ce29aa134e2008bf0df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725994779041992941,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a58ccf7e6cfdbd0ab779aad78dd3e581,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20,PodSandboxId:22e8260e37ec3ec52a162bd457c37fff320c66bc38cd18190f8f34fd2dabbc6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725994779011381611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98220ab07d6f1e726ce95e161182b
884,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68,PodSandboxId:c48af6846c37e9ec88371a94931dac7b050b33b056393e008158cb0ecf7657d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725994778933693513,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd88d683448b1776d4a04c84b404bf6
8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c0e84b7-459f-4fc0-8cd4-c94dad5f381d name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.148205243Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10f518d9-4f92-4762-895e-75d91bda07d4 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.148292231Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10f518d9-4f92-4762-895e-75d91bda07d4 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.149575425Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1cf91609-7074-49e3-9193-e28b0882f14f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.150117315Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995589150093497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1cf91609-7074-49e3-9193-e28b0882f14f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.151058120Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eaffa1ec-e57f-4baa-8546-f9184257ee7e name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.151133779Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eaffa1ec-e57f-4baa-8546-f9184257ee7e name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.151327046Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391,PodSandboxId:d28c5f4f4a3780603b92a3af9801be921a308a87cb31a5b73d3f6b6c41de17fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725994813631660776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7536d42b-90f4-44de-a7ba-652f8e535304,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d45d79d8703a5fc2a62839ad8bb6d496ce08997cc5153453c5e9b7a59a1364,PodSandboxId:01ef94f4f5f14ac6fccd5857d26eb00e16c4ead3103026124601c7169eadb226,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1725994792626440686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0a8517a-170a-406e-89f5-7cc376bb0908,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100,PodSandboxId:508fb9d46dc56e54b23345f1a393f3152cddc61eb4a413035dee2892a6628d6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725994790408117005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nq9fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9,PodSandboxId:d28c5f4f4a3780603b92a3af9801be921a308a87cb31a5b73d3f6b6c41de17fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725994782727181976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7536d42b-90f4-44de-a7ba-652f8e535304,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27,PodSandboxId:a1be56467d27e7d8e241b79081cf999e6bf06801b77512fefae22b50774058c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725994782711889420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t8r9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca739fc-0169-433b-85f1
-17bf3ab538cb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade,PodSandboxId:07abf1a8ad095596d0304c9d02d6e49d826aa0cf9dbc2685801b579782a3f18d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725994779085118784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a8f55c3c023cbb2065ea0b24444a9d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a,PodSandboxId:4691d75c717c5e7b65e5cbf439358cf50e21cab9b3177ce29aa134e2008bf0df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725994779041992941,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a58ccf7e6cfdbd0ab779aad78dd3e581,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20,PodSandboxId:22e8260e37ec3ec52a162bd457c37fff320c66bc38cd18190f8f34fd2dabbc6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725994779011381611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98220ab07d6f1e726ce95e161182b
884,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68,PodSandboxId:c48af6846c37e9ec88371a94931dac7b050b33b056393e008158cb0ecf7657d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725994778933693513,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd88d683448b1776d4a04c84b404bf6
8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eaffa1ec-e57f-4baa-8546-f9184257ee7e name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.191629818Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c576707-632f-47c0-82dc-47c09011a78b name=/runtime.v1.RuntimeService/Version
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.191720507Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c576707-632f-47c0-82dc-47c09011a78b name=/runtime.v1.RuntimeService/Version
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.192668216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=708919e0-bed8-4365-8585-77eecd214282 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.193163835Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995589193138566,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=708919e0-bed8-4365-8585-77eecd214282 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.193620627Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2539c91e-00da-4fc4-8955-3e9970f4c518 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.193692119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2539c91e-00da-4fc4-8955-3e9970f4c518 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.193937126Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391,PodSandboxId:d28c5f4f4a3780603b92a3af9801be921a308a87cb31a5b73d3f6b6c41de17fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725994813631660776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7536d42b-90f4-44de-a7ba-652f8e535304,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d45d79d8703a5fc2a62839ad8bb6d496ce08997cc5153453c5e9b7a59a1364,PodSandboxId:01ef94f4f5f14ac6fccd5857d26eb00e16c4ead3103026124601c7169eadb226,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1725994792626440686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0a8517a-170a-406e-89f5-7cc376bb0908,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100,PodSandboxId:508fb9d46dc56e54b23345f1a393f3152cddc61eb4a413035dee2892a6628d6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725994790408117005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nq9fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9,PodSandboxId:d28c5f4f4a3780603b92a3af9801be921a308a87cb31a5b73d3f6b6c41de17fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725994782727181976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7536d42b-90f4-44de-a7ba-652f8e535304,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27,PodSandboxId:a1be56467d27e7d8e241b79081cf999e6bf06801b77512fefae22b50774058c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725994782711889420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t8r9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca739fc-0169-433b-85f1
-17bf3ab538cb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade,PodSandboxId:07abf1a8ad095596d0304c9d02d6e49d826aa0cf9dbc2685801b579782a3f18d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725994779085118784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a8f55c3c023cbb2065ea0b24444a9d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a,PodSandboxId:4691d75c717c5e7b65e5cbf439358cf50e21cab9b3177ce29aa134e2008bf0df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725994779041992941,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a58ccf7e6cfdbd0ab779aad78dd3e581,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20,PodSandboxId:22e8260e37ec3ec52a162bd457c37fff320c66bc38cd18190f8f34fd2dabbc6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725994779011381611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98220ab07d6f1e726ce95e161182b
884,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68,PodSandboxId:c48af6846c37e9ec88371a94931dac7b050b33b056393e008158cb0ecf7657d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725994778933693513,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd88d683448b1776d4a04c84b404bf6
8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2539c91e-00da-4fc4-8955-3e9970f4c518 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.231323635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02cc3ccb-ffe5-48b0-9d43-08051a9157bc name=/runtime.v1.RuntimeService/Version
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.231410016Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02cc3ccb-ffe5-48b0-9d43-08051a9157bc name=/runtime.v1.RuntimeService/Version
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.232633197Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=278d0de2-8633-4fc1-a73d-13486cbfef3f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.233106827Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995589233085029,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=278d0de2-8633-4fc1-a73d-13486cbfef3f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.233505156Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba99e4ad-8dc2-4680-a61c-c841b4fe4f74 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.233576845Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba99e4ad-8dc2-4680-a61c-c841b4fe4f74 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:09 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:13:09.233773390Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391,PodSandboxId:d28c5f4f4a3780603b92a3af9801be921a308a87cb31a5b73d3f6b6c41de17fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725994813631660776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7536d42b-90f4-44de-a7ba-652f8e535304,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d45d79d8703a5fc2a62839ad8bb6d496ce08997cc5153453c5e9b7a59a1364,PodSandboxId:01ef94f4f5f14ac6fccd5857d26eb00e16c4ead3103026124601c7169eadb226,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1725994792626440686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0a8517a-170a-406e-89f5-7cc376bb0908,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100,PodSandboxId:508fb9d46dc56e54b23345f1a393f3152cddc61eb4a413035dee2892a6628d6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725994790408117005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nq9fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9,PodSandboxId:d28c5f4f4a3780603b92a3af9801be921a308a87cb31a5b73d3f6b6c41de17fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725994782727181976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7536d42b-90f4-44de-a7ba-652f8e535304,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27,PodSandboxId:a1be56467d27e7d8e241b79081cf999e6bf06801b77512fefae22b50774058c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725994782711889420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t8r9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca739fc-0169-433b-85f1
-17bf3ab538cb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade,PodSandboxId:07abf1a8ad095596d0304c9d02d6e49d826aa0cf9dbc2685801b579782a3f18d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725994779085118784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a8f55c3c023cbb2065ea0b24444a9d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a,PodSandboxId:4691d75c717c5e7b65e5cbf439358cf50e21cab9b3177ce29aa134e2008bf0df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725994779041992941,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a58ccf7e6cfdbd0ab779aad78dd3e581,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20,PodSandboxId:22e8260e37ec3ec52a162bd457c37fff320c66bc38cd18190f8f34fd2dabbc6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725994779011381611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98220ab07d6f1e726ce95e161182b
884,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68,PodSandboxId:c48af6846c37e9ec88371a94931dac7b050b33b056393e008158cb0ecf7657d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725994778933693513,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd88d683448b1776d4a04c84b404bf6
8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba99e4ad-8dc2-4680-a61c-c841b4fe4f74 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b3e0e8df9acc9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   d28c5f4f4a378       storage-provisioner
	46d45d79d8703       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   01ef94f4f5f14       busybox
	24f8e4dfaa105       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   508fb9d46dc56       coredns-6f6b679f8f-nq9fl
	173c9f8505ac0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   d28c5f4f4a378       storage-provisioner
	48c0a781fcf34       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   a1be56467d27e       kube-proxy-4t8r9
	f3db63297412d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   07abf1a8ad095       etcd-default-k8s-diff-port-557504
	1e3f86c05b5ff       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   4691d75c717c5       kube-apiserver-default-k8s-diff-port-557504
	55624c2cb31c2       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   22e8260e37ec3       kube-controller-manager-default-k8s-diff-port-557504
	1a520241ca117       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   c48af6846c37e       kube-scheduler-default-k8s-diff-port-557504
	
	
	==> coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38751 - 32244 "HINFO IN 6012458017028077328.2712800172143965829. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00959273s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-557504
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-557504
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=default-k8s-diff-port-557504
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T18_51_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:51:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-557504
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 19:13:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 19:10:23 +0000   Tue, 10 Sep 2024 18:51:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 19:10:23 +0000   Tue, 10 Sep 2024 18:51:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 19:10:23 +0000   Tue, 10 Sep 2024 18:51:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 19:10:23 +0000   Tue, 10 Sep 2024 18:59:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.54
	  Hostname:    default-k8s-diff-port-557504
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cab7798f9fc3461b8abf4234670c0a64
	  System UUID:                cab7798f-9fc3-461b-8abf-4234670c0a64
	  Boot ID:                    0813731b-96be-409b-9746-de10369ef99f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-6f6b679f8f-nq9fl                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-557504                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-557504             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-557504    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-4t8r9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-557504             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-4sfwg                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-557504 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-557504 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-557504 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-557504 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-557504 event: Registered Node default-k8s-diff-port-557504 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-557504 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-557504 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-557504 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-557504 event: Registered Node default-k8s-diff-port-557504 in Controller
	
	
	==> dmesg <==
	[Sep10 18:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050803] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039782] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.805312] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.620469] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.618002] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.901120] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.081591] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080108] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.201989] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.127337] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.320225] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[  +4.600724] systemd-fstab-generator[798]: Ignoring "noauto" option for root device
	[  +0.073364] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.937239] systemd-fstab-generator[918]: Ignoring "noauto" option for root device
	[  +4.582327] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.523630] systemd-fstab-generator[1553]: Ignoring "noauto" option for root device
	[  +3.231118] kauditd_printk_skb: 64 callbacks suppressed
	[Sep10 19:00] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] <==
	{"level":"info","ts":"2024-09-10T19:00:19.214500Z","caller":"traceutil/trace.go:171","msg":"trace[1687574643] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-4sfwg; range_end:; response_count:1; response_revision:590; }","duration":"506.523941ms","start":"2024-09-10T19:00:18.707960Z","end":"2024-09-10T19:00:19.214484Z","steps":["trace[1687574643] 'agreement among raft nodes before linearized reading'  (duration: 506.35046ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T19:00:19.214525Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T19:00:18.707928Z","time spent":"506.591216ms","remote":"127.0.0.1:53724","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4372,"request content":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-4sfwg\" "}
	{"level":"warn","ts":"2024-09-10T19:00:19.214690Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"464.38601ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T19:00:19.214724Z","caller":"traceutil/trace.go:171","msg":"trace[125019005] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:590; }","duration":"464.427547ms","start":"2024-09-10T19:00:18.750292Z","end":"2024-09-10T19:00:19.214719Z","steps":["trace[125019005] 'agreement among raft nodes before linearized reading'  (duration: 464.377577ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T19:00:19.699183Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.774788ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14269534324033562723 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-6867b74b74-4sfwg\" mod_revision:579 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-4sfwg\" value_size:4313 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-4sfwg\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-10T19:00:19.699281Z","caller":"traceutil/trace.go:171","msg":"trace[1390045946] linearizableReadLoop","detail":"{readStateIndex:630; appliedIndex:629; }","duration":"248.865828ms","start":"2024-09-10T19:00:19.450401Z","end":"2024-09-10T19:00:19.699266Z","steps":["trace[1390045946] 'read index received'  (duration: 100.52414ms)","trace[1390045946] 'applied index is now lower than readState.Index'  (duration: 148.340592ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-10T19:00:19.699421Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.008945ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-6867b74b74-4sfwg.17f3f71dc7964985\" ","response":"range_response_count:1 size:804"}
	{"level":"info","ts":"2024-09-10T19:00:19.699473Z","caller":"traceutil/trace.go:171","msg":"trace[457947746] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-6867b74b74-4sfwg.17f3f71dc7964985; range_end:; response_count:1; response_revision:592; }","duration":"249.06317ms","start":"2024-09-10T19:00:19.450396Z","end":"2024-09-10T19:00:19.699459Z","steps":["trace[457947746] 'agreement among raft nodes before linearized reading'  (duration: 248.907767ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T19:00:19.699766Z","caller":"traceutil/trace.go:171","msg":"trace[2122901135] transaction","detail":"{read_only:false; response_revision:592; number_of_response:1; }","duration":"324.027152ms","start":"2024-09-10T19:00:19.375711Z","end":"2024-09-10T19:00:19.699739Z","steps":["trace[2122901135] 'process raft request'  (duration: 175.25945ms)","trace[2122901135] 'compare'  (duration: 147.610773ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-10T19:00:19.700280Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T19:00:19.375699Z","time spent":"324.121042ms","remote":"127.0.0.1:53724","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4379,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-6867b74b74-4sfwg\" mod_revision:579 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-4sfwg\" value_size:4313 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-4sfwg\" > >"}
	{"level":"warn","ts":"2024-09-10T19:00:20.188390Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"347.020312ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14269534324033562725 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-6867b74b74-4sfwg.17f3f71dc7964985\" mod_revision:544 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-4sfwg.17f3f71dc7964985\" value_size:694 lease:5046162287178786466 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-4sfwg.17f3f71dc7964985\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-10T19:00:20.188553Z","caller":"traceutil/trace.go:171","msg":"trace[1769496425] linearizableReadLoop","detail":"{readStateIndex:631; appliedIndex:630; }","duration":"480.018638ms","start":"2024-09-10T19:00:19.708524Z","end":"2024-09-10T19:00:20.188542Z","steps":["trace[1769496425] 'read index received'  (duration: 132.791947ms)","trace[1769496425] 'applied index is now lower than readState.Index'  (duration: 347.225926ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-10T19:00:20.188626Z","caller":"traceutil/trace.go:171","msg":"trace[104886911] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"484.964116ms","start":"2024-09-10T19:00:19.703645Z","end":"2024-09-10T19:00:20.188609Z","steps":["trace[104886911] 'process raft request'  (duration: 137.666522ms)","trace[104886911] 'compare'  (duration: 346.942274ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-10T19:00:20.188729Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T19:00:19.703627Z","time spent":"485.056158ms","remote":"127.0.0.1:53600","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":789,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-6867b74b74-4sfwg.17f3f71dc7964985\" mod_revision:544 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-4sfwg.17f3f71dc7964985\" value_size:694 lease:5046162287178786466 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-4sfwg.17f3f71dc7964985\" > >"}
	{"level":"warn","ts":"2024-09-10T19:00:20.188745Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"480.216067ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-4sfwg\" ","response":"range_response_count:1 size:4394"}
	{"level":"info","ts":"2024-09-10T19:00:20.188975Z","caller":"traceutil/trace.go:171","msg":"trace[1823189880] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-4sfwg; range_end:; response_count:1; response_revision:593; }","duration":"480.447001ms","start":"2024-09-10T19:00:19.708520Z","end":"2024-09-10T19:00:20.188967Z","steps":["trace[1823189880] 'agreement among raft nodes before linearized reading'  (duration: 480.097892ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T19:00:20.189062Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"438.728348ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T19:00:20.189105Z","caller":"traceutil/trace.go:171","msg":"trace[1885017505] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:593; }","duration":"438.772787ms","start":"2024-09-10T19:00:19.750325Z","end":"2024-09-10T19:00:20.189098Z","steps":["trace[1885017505] 'agreement among raft nodes before linearized reading'  (duration: 438.709147ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T19:00:20.189159Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"307.383808ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T19:00:20.189632Z","caller":"traceutil/trace.go:171","msg":"trace[590178216] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:593; }","duration":"307.854228ms","start":"2024-09-10T19:00:19.881769Z","end":"2024-09-10T19:00:20.189623Z","steps":["trace[590178216] 'agreement among raft nodes before linearized reading'  (duration: 307.370309ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T19:00:20.189756Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T19:00:19.881728Z","time spent":"308.01925ms","remote":"127.0.0.1:53530","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-10T19:00:20.189065Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T19:00:19.708495Z","time spent":"480.561086ms","remote":"127.0.0.1:53724","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4416,"request content":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-4sfwg\" "}
	{"level":"info","ts":"2024-09-10T19:09:40.714196Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":818}
	{"level":"info","ts":"2024-09-10T19:09:40.724236Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":818,"took":"9.752411ms","hash":1755826084,"current-db-size-bytes":2576384,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2576384,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-09-10T19:09:40.724279Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1755826084,"revision":818,"compact-revision":-1}
	
	
	==> kernel <==
	 19:13:09 up 13 min,  0 users,  load average: 0.11, 0.15, 0.09
	Linux default-k8s-diff-port-557504 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] <==
	W0910 19:09:43.042618       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:09:43.042726       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0910 19:09:43.043715       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0910 19:09:43.043778       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0910 19:10:43.044428       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:10:43.044655       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0910 19:10:43.044773       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:10:43.044955       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0910 19:10:43.045901       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0910 19:10:43.046081       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0910 19:12:43.046547       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:12:43.046727       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0910 19:12:43.046770       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:12:43.046784       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0910 19:12:43.047930       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0910 19:12:43.047987       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] <==
	E0910 19:07:45.642921       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:07:46.193270       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:08:15.649284       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:08:16.201135       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:08:45.655007       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:08:46.208686       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:09:15.660931       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:09:16.215808       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:09:45.668117       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:09:46.224678       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:10:15.674171       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:10:16.232961       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0910 19:10:23.591609       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-557504"
	E0910 19:10:45.680762       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:10:46.241923       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0910 19:10:53.384418       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="315.876µs"
	I0910 19:11:08.387296       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="226.959µs"
	E0910 19:11:15.687491       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:11:16.250378       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:11:45.693379       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:11:46.257239       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:12:15.699995       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:12:16.264716       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:12:45.706607       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:12:46.272358       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 18:59:42.978940       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 18:59:42.990213       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.54"]
	E0910 18:59:42.990419       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 18:59:43.037420       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 18:59:43.037566       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 18:59:43.037620       1 server_linux.go:169] "Using iptables Proxier"
	I0910 18:59:43.040566       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 18:59:43.041123       1 server.go:483] "Version info" version="v1.31.0"
	I0910 18:59:43.041319       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:59:43.043262       1 config.go:197] "Starting service config controller"
	I0910 18:59:43.043364       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 18:59:43.043425       1 config.go:104] "Starting endpoint slice config controller"
	I0910 18:59:43.043495       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 18:59:43.044261       1 config.go:326] "Starting node config controller"
	I0910 18:59:43.044304       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 18:59:43.143925       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 18:59:43.143996       1 shared_informer.go:320] Caches are synced for service config
	I0910 18:59:43.144563       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] <==
	I0910 18:59:40.302269       1 serving.go:386] Generated self-signed cert in-memory
	W0910 18:59:42.000353       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0910 18:59:42.000397       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0910 18:59:42.000407       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0910 18:59:42.000413       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0910 18:59:42.079141       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0910 18:59:42.079185       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:59:42.087791       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0910 18:59:42.088017       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0910 18:59:42.088055       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 18:59:42.088069       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0910 18:59:42.188327       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 19:12:01 default-k8s-diff-port-557504 kubelet[925]: E0910 19:12:01.368603     925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4sfwg" podUID="6b5d0161-6a62-4752-b714-ada6b3772956"
	Sep 10 19:12:08 default-k8s-diff-port-557504 kubelet[925]: E0910 19:12:08.550624     925 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995528550211840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:08 default-k8s-diff-port-557504 kubelet[925]: E0910 19:12:08.551166     925 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995528550211840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:15 default-k8s-diff-port-557504 kubelet[925]: E0910 19:12:15.369311     925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4sfwg" podUID="6b5d0161-6a62-4752-b714-ada6b3772956"
	Sep 10 19:12:18 default-k8s-diff-port-557504 kubelet[925]: E0910 19:12:18.553332     925 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995538552996963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:18 default-k8s-diff-port-557504 kubelet[925]: E0910 19:12:18.553701     925 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995538552996963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:28 default-k8s-diff-port-557504 kubelet[925]: E0910 19:12:28.368839     925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4sfwg" podUID="6b5d0161-6a62-4752-b714-ada6b3772956"
	Sep 10 19:12:28 default-k8s-diff-port-557504 kubelet[925]: E0910 19:12:28.555346     925 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995548554987440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:28 default-k8s-diff-port-557504 kubelet[925]: E0910 19:12:28.555389     925 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995548554987440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:38 default-k8s-diff-port-557504 kubelet[925]: E0910 19:12:38.384542     925 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 19:12:38 default-k8s-diff-port-557504 kubelet[925]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 19:12:38 default-k8s-diff-port-557504 kubelet[925]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:12:38 default-k8s-diff-port-557504 kubelet[925]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:12:38 default-k8s-diff-port-557504 kubelet[925]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 19:12:38 default-k8s-diff-port-557504 kubelet[925]: E0910 19:12:38.557525     925 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995558557248363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:38 default-k8s-diff-port-557504 kubelet[925]: E0910 19:12:38.557553     925 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995558557248363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:43 default-k8s-diff-port-557504 kubelet[925]: E0910 19:12:43.368791     925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4sfwg" podUID="6b5d0161-6a62-4752-b714-ada6b3772956"
	Sep 10 19:12:48 default-k8s-diff-port-557504 kubelet[925]: E0910 19:12:48.559082     925 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995568558700450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:48 default-k8s-diff-port-557504 kubelet[925]: E0910 19:12:48.559616     925 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995568558700450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:55 default-k8s-diff-port-557504 kubelet[925]: E0910 19:12:55.369310     925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4sfwg" podUID="6b5d0161-6a62-4752-b714-ada6b3772956"
	Sep 10 19:12:58 default-k8s-diff-port-557504 kubelet[925]: E0910 19:12:58.561317     925 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995578560964247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:58 default-k8s-diff-port-557504 kubelet[925]: E0910 19:12:58.561723     925 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995578560964247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:06 default-k8s-diff-port-557504 kubelet[925]: E0910 19:13:06.369111     925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4sfwg" podUID="6b5d0161-6a62-4752-b714-ada6b3772956"
	Sep 10 19:13:08 default-k8s-diff-port-557504 kubelet[925]: E0910 19:13:08.564384     925 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995588563784298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:08 default-k8s-diff-port-557504 kubelet[925]: E0910 19:13:08.564708     925 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995588563784298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] <==
	I0910 18:59:42.861462       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0910 19:00:12.869353       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] <==
	I0910 19:00:13.766952       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 19:00:13.778228       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 19:00:13.778416       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 19:00:31.183145       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 19:00:31.184696       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-557504_f49bae0e-e086-4bed-9cbf-26a1021824f1!
	I0910 19:00:31.184900       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"79717468-62c3-48f1-b324-f2d2880b2de2", APIVersion:"v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-557504_f49bae0e-e086-4bed-9cbf-26a1021824f1 became leader
	I0910 19:00:31.285422       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-557504_f49bae0e-e086-4bed-9cbf-26a1021824f1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-557504 -n default-k8s-diff-port-557504
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-557504 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-4sfwg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-557504 describe pod metrics-server-6867b74b74-4sfwg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-557504 describe pod metrics-server-6867b74b74-4sfwg: exit status 1 (63.837185ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-4sfwg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-557504 describe pod metrics-server-6867b74b74-4sfwg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.14s)
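Editor's note: the UserAppExistsAfterStop failures above and below boil down to the harness polling the restarted cluster for a pod carrying the k8s-app=kubernetes-dashboard label and never seeing one become Running within 9m0s. The following is a minimal sketch of that kind of wait loop using client-go directly; it is not the actual minikube helper in helpers_test.go, and the kubeconfig path, poll interval, and error handling are illustrative assumptions.

// waitfordashboard.go - hypothetical sketch, NOT the minikube test helper.
// Polls the API server for a Running pod matching a label selector, mirroring
// the "waiting 9m0s for pods matching ..." step reported in this test.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the kubeconfig written by this test run (path taken from the logs above).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19598-5973/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, selector := "kubernetes-dashboard", "k8s-app=kubernetes-dashboard"

	// Poll every 3s for up to 9m, matching the window the test reports.
	err = wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				// Treat transient API errors as "not ready yet" and keep polling.
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Printf("no Running pod for %q in %q: %v\n", selector, ns, err)
		return
	}
	fmt.Println("dashboard pod is Running")
}

Returning (false, nil) on a failed List keeps the loop retrying through transient API-server hiccups, so the only terminal outcome besides success is the context deadline, which is exactly the "context deadline exceeded" error recorded in these failures.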

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-347802 -n no-preload-347802
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-10 19:13:41.269673492 +0000 UTC m=+6281.093448254
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-347802 -n no-preload-347802
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-347802 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-347802 logs -n 25: (2.112854568s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-642043 sudo cat                              | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo find                             | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo crio                             | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-642043                                       | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-186737 | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | disable-driver-mounts-186737                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:52 UTC |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-836868            | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-836868                                  | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-347802             | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-557504  | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC | 10 Sep 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC |                     |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-836868                 | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-836868                                  | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-432422        | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-347802                  | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-557504       | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-432422             | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:56:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:56:02.487676   72122 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:56:02.487789   72122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:56:02.487799   72122 out.go:358] Setting ErrFile to fd 2...
	I0910 18:56:02.487804   72122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:56:02.487953   72122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:56:02.488491   72122 out.go:352] Setting JSON to false
	I0910 18:56:02.489572   72122 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5914,"bootTime":1725988648,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 18:56:02.489637   72122 start.go:139] virtualization: kvm guest
	I0910 18:56:02.491991   72122 out.go:177] * [old-k8s-version-432422] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 18:56:02.493117   72122 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:56:02.493113   72122 notify.go:220] Checking for updates...
	I0910 18:56:02.494213   72122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:56:02.495356   72122 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:56:02.496370   72122 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:56:02.497440   72122 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 18:56:02.498703   72122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:56:02.500450   72122 config.go:182] Loaded profile config "old-k8s-version-432422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0910 18:56:02.501100   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:56:02.501150   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:56:02.515836   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0910 18:56:02.516286   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:56:02.516787   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:56:02.516815   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:56:02.517116   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:56:02.517300   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:56:02.519092   72122 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0910 18:56:02.520121   72122 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:56:02.520405   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:56:02.520436   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:56:02.534860   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34601
	I0910 18:56:02.535243   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:56:02.535688   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:56:02.535711   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:56:02.536004   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:56:02.536215   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:56:02.570682   72122 out.go:177] * Using the kvm2 driver based on existing profile
	I0910 18:56:02.571710   72122 start.go:297] selected driver: kvm2
	I0910 18:56:02.571722   72122 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:56:02.571821   72122 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:56:02.572465   72122 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:56:02.572528   72122 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 18:56:02.587001   72122 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 18:56:02.587381   72122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:56:02.587417   72122 cni.go:84] Creating CNI manager for ""
	I0910 18:56:02.587427   72122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:56:02.587471   72122 start.go:340] cluster config:
	{Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:56:02.587599   72122 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:56:02.589116   72122 out.go:177] * Starting "old-k8s-version-432422" primary control-plane node in "old-k8s-version-432422" cluster
	I0910 18:56:02.590155   72122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 18:56:02.590185   72122 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0910 18:56:02.590194   72122 cache.go:56] Caching tarball of preloaded images
	I0910 18:56:02.590294   72122 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 18:56:02.590313   72122 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0910 18:56:02.590415   72122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/config.json ...
	I0910 18:56:02.590612   72122 start.go:360] acquireMachinesLock for old-k8s-version-432422: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:56:08.097313   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:11.169360   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:17.249255   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:20.321326   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:26.401359   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:29.473351   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:35.553474   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:38.625322   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:44.705324   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:47.777408   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:53.857373   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:56.929356   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:03.009354   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:06.081346   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:12.161342   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:15.233363   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:21.313385   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:24.385281   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:30.465347   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:33.537368   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:39.617395   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:42.689359   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:48.769334   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:51.841388   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:57.921359   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:00.993375   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:07.073343   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:10.145433   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:16.225336   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:19.297345   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:25.377291   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:28.449365   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:34.529306   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:37.601300   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:43.681334   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:46.753328   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:49.757234   71529 start.go:364] duration metric: took 4m17.481092907s to acquireMachinesLock for "no-preload-347802"
	I0910 18:58:49.757299   71529 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:58:49.757316   71529 fix.go:54] fixHost starting: 
	I0910 18:58:49.757667   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:58:49.757694   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:58:49.772681   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41041
	I0910 18:58:49.773067   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:58:49.773498   71529 main.go:141] libmachine: Using API Version  1
	I0910 18:58:49.773518   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:58:49.773963   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:58:49.774127   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:58:49.774279   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 18:58:49.775704   71529 fix.go:112] recreateIfNeeded on no-preload-347802: state=Stopped err=<nil>
	I0910 18:58:49.775726   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	W0910 18:58:49.775886   71529 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:58:49.777669   71529 out.go:177] * Restarting existing kvm2 VM for "no-preload-347802" ...
	I0910 18:58:49.778739   71529 main.go:141] libmachine: (no-preload-347802) Calling .Start
	I0910 18:58:49.778882   71529 main.go:141] libmachine: (no-preload-347802) Ensuring networks are active...
	I0910 18:58:49.779509   71529 main.go:141] libmachine: (no-preload-347802) Ensuring network default is active
	I0910 18:58:49.779824   71529 main.go:141] libmachine: (no-preload-347802) Ensuring network mk-no-preload-347802 is active
	I0910 18:58:49.780121   71529 main.go:141] libmachine: (no-preload-347802) Getting domain xml...
	I0910 18:58:49.780766   71529 main.go:141] libmachine: (no-preload-347802) Creating domain...
	I0910 18:58:50.967405   71529 main.go:141] libmachine: (no-preload-347802) Waiting to get IP...
	I0910 18:58:50.968284   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:50.968647   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:50.968726   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:50.968628   72707 retry.go:31] will retry after 197.094328ms: waiting for machine to come up
	I0910 18:58:51.167237   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:51.167630   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:51.167683   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:51.167603   72707 retry.go:31] will retry after 272.376855ms: waiting for machine to come up
	I0910 18:58:51.441212   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:51.441673   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:51.441698   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:51.441636   72707 retry.go:31] will retry after 458.172114ms: waiting for machine to come up
	I0910 18:58:51.900991   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:51.901464   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:51.901498   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:51.901428   72707 retry.go:31] will retry after 442.42629ms: waiting for machine to come up
	I0910 18:58:49.754913   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:58:49.754977   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 18:58:49.755310   71183 buildroot.go:166] provisioning hostname "embed-certs-836868"
	I0910 18:58:49.755335   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 18:58:49.755513   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 18:58:49.757052   71183 machine.go:96] duration metric: took 4m37.423474417s to provisionDockerMachine
	I0910 18:58:49.757138   71183 fix.go:56] duration metric: took 4m37.44458491s for fixHost
	I0910 18:58:49.757149   71183 start.go:83] releasing machines lock for "embed-certs-836868", held for 4m37.444613055s
	W0910 18:58:49.757173   71183 start.go:714] error starting host: provision: host is not running
	W0910 18:58:49.757263   71183 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0910 18:58:49.757273   71183 start.go:729] Will try again in 5 seconds ...
	I0910 18:58:52.345053   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:52.345519   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:52.345540   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:52.345463   72707 retry.go:31] will retry after 732.353971ms: waiting for machine to come up
	I0910 18:58:53.079229   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:53.079686   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:53.079714   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:53.079638   72707 retry.go:31] will retry after 658.057224ms: waiting for machine to come up
	I0910 18:58:53.739313   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:53.739750   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:53.739811   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:53.739732   72707 retry.go:31] will retry after 910.559952ms: waiting for machine to come up
	I0910 18:58:54.651714   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:54.652075   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:54.652099   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:54.652027   72707 retry.go:31] will retry after 1.410431493s: waiting for machine to come up
	I0910 18:58:56.063996   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:56.064396   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:56.064418   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:56.064360   72707 retry.go:31] will retry after 1.795467467s: waiting for machine to come up
	I0910 18:58:54.759533   71183 start.go:360] acquireMachinesLock for embed-certs-836868: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:58:57.862130   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:57.862484   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:57.862509   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:57.862453   72707 retry.go:31] will retry after 1.450403908s: waiting for machine to come up
	I0910 18:58:59.315197   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:59.315621   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:59.315657   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:59.315566   72707 retry.go:31] will retry after 1.81005281s: waiting for machine to come up
	I0910 18:59:01.128164   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:01.128611   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:59:01.128642   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:59:01.128563   72707 retry.go:31] will retry after 3.333505805s: waiting for machine to come up
	I0910 18:59:04.464526   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:04.465004   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:59:04.465030   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:59:04.464951   72707 retry.go:31] will retry after 3.603817331s: waiting for machine to come up
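	The repeated "will retry after ..." lines above come from libmachine polling the VM for a DHCP lease before it can SSH in, with a growing wait between attempts. For context, a minimal sketch of that retry-with-backoff pattern in Go; the helper names and timings here are hypothetical, not minikube's actual retry.go API.

	// Illustrative sketch only: poll for the machine's IP with growing, jittered waits.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP calls lookup until it returns an address or the deadline passes,
	// sleeping a growing, jittered interval between attempts.
	func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
		start := time.Now()
		wait := 200 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			sleep := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter so concurrent waiters don't sync up
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			wait *= 2
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("unable to find current IP address")
			}
			return "192.0.2.10", nil // placeholder address for the sketch
		}, 2*time.Minute)
		fmt.Println(ip, err)
	}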
	I0910 18:59:09.257584   71627 start.go:364] duration metric: took 4m27.770499275s to acquireMachinesLock for "default-k8s-diff-port-557504"
	I0910 18:59:09.257656   71627 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:09.257673   71627 fix.go:54] fixHost starting: 
	I0910 18:59:09.258100   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:09.258144   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:09.276230   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0910 18:59:09.276622   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:09.277129   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:09.277151   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:09.277489   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:09.277663   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:09.277793   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:09.279006   71627 fix.go:112] recreateIfNeeded on default-k8s-diff-port-557504: state=Stopped err=<nil>
	I0910 18:59:09.279043   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	W0910 18:59:09.279178   71627 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:09.281106   71627 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-557504" ...
	I0910 18:59:08.073057   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.073638   71529 main.go:141] libmachine: (no-preload-347802) Found IP for machine: 192.168.50.138
	I0910 18:59:08.073660   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has current primary IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.073666   71529 main.go:141] libmachine: (no-preload-347802) Reserving static IP address...
	I0910 18:59:08.074129   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "no-preload-347802", mac: "52:54:00:5b:b1:44", ip: "192.168.50.138"} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.074153   71529 main.go:141] libmachine: (no-preload-347802) Reserved static IP address: 192.168.50.138
	I0910 18:59:08.074170   71529 main.go:141] libmachine: (no-preload-347802) DBG | skip adding static IP to network mk-no-preload-347802 - found existing host DHCP lease matching {name: "no-preload-347802", mac: "52:54:00:5b:b1:44", ip: "192.168.50.138"}
	I0910 18:59:08.074179   71529 main.go:141] libmachine: (no-preload-347802) Waiting for SSH to be available...
	I0910 18:59:08.074187   71529 main.go:141] libmachine: (no-preload-347802) DBG | Getting to WaitForSSH function...
	I0910 18:59:08.076434   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.076744   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.076767   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.076928   71529 main.go:141] libmachine: (no-preload-347802) DBG | Using SSH client type: external
	I0910 18:59:08.076950   71529 main.go:141] libmachine: (no-preload-347802) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa (-rw-------)
	I0910 18:59:08.076979   71529 main.go:141] libmachine: (no-preload-347802) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:08.076992   71529 main.go:141] libmachine: (no-preload-347802) DBG | About to run SSH command:
	I0910 18:59:08.077029   71529 main.go:141] libmachine: (no-preload-347802) DBG | exit 0
	I0910 18:59:08.201181   71529 main.go:141] libmachine: (no-preload-347802) DBG | SSH cmd err, output: <nil>: 
	I0910 18:59:08.201561   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetConfigRaw
	I0910 18:59:08.202195   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:08.204390   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.204639   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.204676   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.204932   71529 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/config.json ...
	I0910 18:59:08.205227   71529 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:08.205245   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:08.205464   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.207451   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.207833   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.207862   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.207956   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.208120   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.208266   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.208402   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.208584   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.208811   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.208826   71529 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:08.317392   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:08.317421   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetMachineName
	I0910 18:59:08.317693   71529 buildroot.go:166] provisioning hostname "no-preload-347802"
	I0910 18:59:08.317721   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetMachineName
	I0910 18:59:08.317870   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.320440   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.320749   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.320777   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.320922   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.321092   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.321295   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.321433   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.321607   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.321764   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.321778   71529 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-347802 && echo "no-preload-347802" | sudo tee /etc/hostname
	I0910 18:59:08.442907   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-347802
	
	I0910 18:59:08.442932   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.445449   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.445743   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.445769   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.445930   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.446135   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.446308   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.446461   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.446642   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.446831   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.446853   71529 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-347802' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-347802/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-347802' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:59:08.561710   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:59:08.561738   71529 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:08.561760   71529 buildroot.go:174] setting up certificates
	I0910 18:59:08.561771   71529 provision.go:84] configureAuth start
	I0910 18:59:08.561782   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetMachineName
	I0910 18:59:08.562065   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:08.564917   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.565296   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.565318   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.565468   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.567579   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.567883   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.567909   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.567998   71529 provision.go:143] copyHostCerts
	I0910 18:59:08.568062   71529 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:08.568074   71529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:08.568155   71529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:08.568259   71529 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:08.568269   71529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:08.568297   71529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:08.568362   71529 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:08.568369   71529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:08.568398   71529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:08.568457   71529 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.no-preload-347802 san=[127.0.0.1 192.168.50.138 localhost minikube no-preload-347802]
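	The configureAuth step above regenerates the machine's server certificate, signed by the CA and carrying the logged SANs (127.0.0.1, 192.168.50.138, localhost, minikube, no-preload-347802). A rough sketch of that kind of issuance with the standard crypto/x509 package follows; the file names, organization string, key format, and validity period are assumptions for illustration, not minikube's exact provision code. The resulting server.pem and server-key.pem are what the next lines copy to /etc/docker on the VM.

	// Illustrative sketch only: sign a server certificate with a local CA.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// mustPEM reads a PEM file and returns the DER bytes of its first block.
	func mustPEM(path string) []byte {
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM data in " + path)
		}
		return block.Bytes
	}

	func main() {
		caCert, err := x509.ParseCertificate(mustPEM("ca.pem"))
		if err != nil {
			panic(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem")) // assumes an RSA PKCS#1 CA key
		if err != nil {
			panic(err)
		}
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-347802"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0), // assumed validity
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "no-preload-347802"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.138")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // print the signed server cert
	}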
	I0910 18:59:08.635212   71529 provision.go:177] copyRemoteCerts
	I0910 18:59:08.635296   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:08.635321   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.637851   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.638202   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.638227   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.638392   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.638561   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.638727   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.638850   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:08.723477   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:08.747854   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0910 18:59:08.770184   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:08.792105   71529 provision.go:87] duration metric: took 230.324534ms to configureAuth
	I0910 18:59:08.792125   71529 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:08.792306   71529 config.go:182] Loaded profile config "no-preload-347802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:59:08.792389   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.795139   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.795414   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.795440   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.795580   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.795767   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.795931   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.796075   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.796201   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.796385   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.796404   71529 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:09.021498   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:09.021530   71529 machine.go:96] duration metric: took 816.290576ms to provisionDockerMachine
	I0910 18:59:09.021540   71529 start.go:293] postStartSetup for "no-preload-347802" (driver="kvm2")
	I0910 18:59:09.021566   71529 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:09.021587   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.021923   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:09.021951   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.024598   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.024935   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.024965   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.025210   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.025416   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.025598   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.025747   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:09.107986   71529 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:09.111947   71529 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:09.111967   71529 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:09.112028   71529 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:09.112098   71529 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:09.112184   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:09.121734   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:09.144116   71529 start.go:296] duration metric: took 122.562738ms for postStartSetup
	I0910 18:59:09.144159   71529 fix.go:56] duration metric: took 19.386851685s for fixHost
	I0910 18:59:09.144183   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.146816   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.147237   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.147278   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.147396   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.147583   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.147754   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.147886   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.148060   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:09.148274   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:09.148285   71529 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:09.257433   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994749.232014074
	
	I0910 18:59:09.257456   71529 fix.go:216] guest clock: 1725994749.232014074
	I0910 18:59:09.257463   71529 fix.go:229] Guest: 2024-09-10 18:59:09.232014074 +0000 UTC Remote: 2024-09-10 18:59:09.144164668 +0000 UTC m=+277.006797443 (delta=87.849406ms)
	I0910 18:59:09.257478   71529 fix.go:200] guest clock delta is within tolerance: 87.849406ms
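	The guest-clock check above runs `date +%s.%N` on the VM and compares the result to the host clock, proceeding only when the delta (here 87.849406ms) is within tolerance. A minimal sketch of that comparison; the one-second tolerance is an assumption for the sketch, not minikube's exact fix.go value.

	// Illustrative sketch only: parse the guest's `date +%s.%N` output and check clock skew.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns output like "1725994749.232014074" into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Pad or truncate the fractional part to exactly nine digits of nanoseconds.
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1725994749.232014074")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // assumed tolerance for the sketch
		fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
	}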
	I0910 18:59:09.257491   71529 start.go:83] releasing machines lock for "no-preload-347802", held for 19.50021281s
	I0910 18:59:09.257522   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.257777   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:09.260357   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.260690   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.260715   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.260895   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.261369   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.261545   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.261631   71529 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:09.261681   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.261749   71529 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:09.261774   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.264296   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.264598   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.264630   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.264650   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.264907   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.264992   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.265020   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.265067   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.265189   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.265266   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.265342   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.265400   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:09.265470   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.265602   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:09.367236   71529 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:09.373255   71529 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:09.513271   71529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:09.519091   71529 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:09.519153   71529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:09.534617   71529 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:09.534639   71529 start.go:495] detecting cgroup driver to use...
	I0910 18:59:09.534698   71529 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:09.551186   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:09.565123   71529 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:09.565193   71529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:09.578892   71529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:09.592571   71529 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:09.700953   71529 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:09.831175   71529 docker.go:233] disabling docker service ...
	I0910 18:59:09.831245   71529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:09.845755   71529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:09.858961   71529 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:10.008707   71529 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:10.144588   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:10.158486   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:10.176399   71529 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:59:10.176456   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.186448   71529 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:10.186511   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.196600   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.206639   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.216913   71529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:10.227030   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.237962   71529 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.255181   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.265618   71529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:10.275659   71529 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:10.275713   71529 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:10.288712   71529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:10.301886   71529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:10.415847   71529 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:59:10.500738   71529 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:10.500829   71529 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:10.506564   71529 start.go:563] Will wait 60s for crictl version
	I0910 18:59:10.506620   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:10.510639   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:10.553929   71529 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:10.554034   71529 ssh_runner.go:195] Run: crio --version
	I0910 18:59:10.582508   71529 ssh_runner.go:195] Run: crio --version
	I0910 18:59:10.622516   71529 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 18:59:09.282182   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Start
	I0910 18:59:09.282345   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Ensuring networks are active...
	I0910 18:59:09.282958   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Ensuring network default is active
	I0910 18:59:09.283450   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Ensuring network mk-default-k8s-diff-port-557504 is active
	I0910 18:59:09.283810   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Getting domain xml...
	I0910 18:59:09.284454   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Creating domain...
	I0910 18:59:10.513168   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting to get IP...
	I0910 18:59:10.514173   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.514591   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.514681   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:10.514587   72843 retry.go:31] will retry after 228.672382ms: waiting for machine to come up
	I0910 18:59:10.745046   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.745450   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.745508   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:10.745440   72843 retry.go:31] will retry after 329.196616ms: waiting for machine to come up
	I0910 18:59:11.075777   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.076237   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.076269   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:11.076188   72843 retry.go:31] will retry after 317.98463ms: waiting for machine to come up
	I0910 18:59:10.623864   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:10.626709   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:10.627042   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:10.627084   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:10.627336   71529 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:10.631579   71529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:10.644077   71529 kubeadm.go:883] updating cluster {Name:no-preload-347802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-347802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:10.644183   71529 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:59:10.644215   71529 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:10.679225   71529 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 18:59:10.679247   71529 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 18:59:10.679332   71529 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:10.679346   71529 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:10.679371   71529 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:10.679384   71529 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0910 18:59:10.679395   71529 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.679472   71529 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.679371   71529 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:10.679336   71529 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:10.681147   71529 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.681163   71529 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:10.681183   71529 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.681196   71529 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0910 18:59:10.681163   71529 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:10.681189   71529 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:10.681232   71529 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:10.681304   71529 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:10.841312   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.848638   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.872351   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:10.875581   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:10.882457   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:10.894360   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0910 18:59:10.895305   71529 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0910 18:59:10.895341   71529 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.895379   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:10.898460   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:10.953614   71529 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0910 18:59:10.953659   71529 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.953706   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.042770   71529 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0910 18:59:11.042837   71529 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0910 18:59:11.042862   71529 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.042873   71529 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.042914   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.042914   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.042820   71529 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0910 18:59:11.043065   71529 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.043097   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.129993   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:11.130090   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:11.130018   71529 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0910 18:59:11.130143   71529 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.130187   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.130189   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.130206   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.130271   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.239573   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:11.239626   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:11.241780   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.241795   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.241853   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.241883   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.360008   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:11.360027   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:11.360067   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.371623   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.371632   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.371632   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.480504   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0910 18:59:11.480591   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.480615   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0910 18:59:11.480635   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0910 18:59:11.480725   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0910 18:59:11.488248   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:11.510860   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0910 18:59:11.510950   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0910 18:59:11.510959   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0910 18:59:11.511032   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0910 18:59:11.514065   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0910 18:59:11.514136   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0910 18:59:11.555358   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0910 18:59:11.555425   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0910 18:59:11.555445   71529 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0910 18:59:11.555465   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0910 18:59:11.555491   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0910 18:59:11.555497   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0910 18:59:11.578210   71529 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0910 18:59:11.578227   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0910 18:59:11.578258   71529 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:11.578273   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0910 18:59:11.578306   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.578345   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0910 18:59:11.578310   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
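	Because no preload tarball exists for this profile, each cached image is staged into /var/lib/minikube/images and loaded with podman; the "copy: skipping ... (exists)" lines above show transfers being elided when the remote file is already present. A small sketch of that decision follows, assuming hypothetical runOnVM and copyToVM helpers rather than minikube's real ssh_runner API.

	// Illustrative sketch only: copy a cached image tarball to the VM unless an
	// identically sized file is already there, then load it with podman.
	package main

	import (
		"fmt"
		"os"
		"strconv"
		"strings"
	)

	// runOnVM and copyToVM are stand-ins for running a command on, and copying a
	// file to, the VM over SSH.
	func runOnVM(cmd string) (string, error)  { return "", fmt.Errorf("sketch: not connected") }
	func copyToVM(local, remote string) error { fmt.Println("scp", local, "->", remote); return nil }

	func ensureImageLoaded(localTar, remoteTar string) error {
		local, err := os.Stat(localTar)
		if err != nil {
			return err
		}
		// Compare sizes; only transfer when the remote file is missing or different.
		out, err := runOnVM("stat -c %s " + remoteTar)
		size, perr := strconv.ParseInt(strings.TrimSpace(out), 10, 64)
		if err != nil || perr != nil || size != local.Size() {
			if err := copyToVM(localTar, remoteTar); err != nil {
				return err
			}
		} else {
			fmt.Println("copy: skipping", remoteTar, "(exists)")
		}
		_, err = runOnVM("sudo podman load -i " + remoteTar)
		return err
	}

	func main() {
		_ = ensureImageLoaded("/tmp/coredns_v1.11.1", "/var/lib/minikube/images/coredns_v1.11.1")
	}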
	I0910 18:59:11.395907   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.396361   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.396389   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:11.396320   72843 retry.go:31] will retry after 511.273215ms: waiting for machine to come up
	I0910 18:59:11.909582   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.910012   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.910041   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:11.909957   72843 retry.go:31] will retry after 712.801984ms: waiting for machine to come up
	I0910 18:59:12.624608   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:12.625042   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:12.625083   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:12.625014   72843 retry.go:31] will retry after 873.57855ms: waiting for machine to come up
	I0910 18:59:13.499767   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:13.500117   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:13.500144   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:13.500071   72843 retry.go:31] will retry after 1.180667971s: waiting for machine to come up
	I0910 18:59:14.682848   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:14.683351   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:14.683381   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:14.683297   72843 retry.go:31] will retry after 1.211684184s: waiting for machine to come up
	I0910 18:59:15.896172   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:15.896651   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:15.896679   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:15.896597   72843 retry.go:31] will retry after 1.541313035s: waiting for machine to come up
	I0910 18:59:13.534642   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.978971061s)
	I0910 18:59:13.534680   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0910 18:59:13.534686   71529 ssh_runner.go:235] Completed: which crictl: (1.956359959s)
	I0910 18:59:13.534704   71529 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0910 18:59:13.534753   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:13.534754   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0910 18:59:13.580670   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:17.439293   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:17.439652   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:17.439679   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:17.439607   72843 retry.go:31] will retry after 2.232253017s: waiting for machine to come up
	I0910 18:59:19.673727   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:19.674114   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:19.674141   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:19.674070   72843 retry.go:31] will retry after 2.324233118s: waiting for machine to come up
	I0910 18:59:17.225574   71529 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.644871938s)
	I0910 18:59:17.225574   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.690724664s)
	I0910 18:59:17.225647   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0910 18:59:17.225671   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:17.225676   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0910 18:59:17.225702   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0910 18:59:19.705947   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.48021773s)
	I0910 18:59:19.705982   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0910 18:59:19.706006   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0910 18:59:19.706045   71529 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.480359026s)
	I0910 18:59:19.706069   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0910 18:59:19.706098   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0910 18:59:19.706176   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0910 18:59:21.666588   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.960494926s)
	I0910 18:59:21.666623   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0910 18:59:21.666640   71529 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.960446302s)
	I0910 18:59:21.666648   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0910 18:59:21.666666   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0910 18:59:21.666699   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0910 18:59:22.000591   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:22.001014   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:22.001047   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:22.000951   72843 retry.go:31] will retry after 3.327224401s: waiting for machine to come up
	I0910 18:59:25.329967   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:25.330414   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:25.330445   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:25.330367   72843 retry.go:31] will retry after 3.45596573s: waiting for machine to come up
	I0910 18:59:23.216195   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.549470753s)
	I0910 18:59:23.216223   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0910 18:59:23.216243   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0910 18:59:23.216286   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0910 18:59:25.077483   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.861176975s)
	I0910 18:59:25.077515   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0910 18:59:25.077547   71529 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0910 18:59:25.077640   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0910 18:59:25.919427   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0910 18:59:25.919478   71529 cache_images.go:123] Successfully loaded all cached images
	I0910 18:59:25.919486   71529 cache_images.go:92] duration metric: took 15.240223152s to LoadCachedImages
	I0910 18:59:25.919502   71529 kubeadm.go:934] updating node { 192.168.50.138 8443 v1.31.0 crio true true} ...
	I0910 18:59:25.919622   71529 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-347802 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-347802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:59:25.919710   71529 ssh_runner.go:195] Run: crio config
	I0910 18:59:25.964461   71529 cni.go:84] Creating CNI manager for ""
	I0910 18:59:25.964489   71529 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:25.964509   71529 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:25.964535   71529 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.138 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-347802 NodeName:no-preload-347802 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:59:25.964698   71529 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-347802"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:59:25.964780   71529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:59:25.975304   71529 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:25.975371   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:25.985124   71529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0910 18:59:26.003355   71529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:26.020117   71529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0910 18:59:26.037026   71529 ssh_runner.go:195] Run: grep 192.168.50.138	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:26.041140   71529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:26.053643   71529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:26.175281   71529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:26.193153   71529 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802 for IP: 192.168.50.138
	I0910 18:59:26.193181   71529 certs.go:194] generating shared ca certs ...
	I0910 18:59:26.193203   71529 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:26.193398   71529 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:26.193452   71529 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:26.193466   71529 certs.go:256] generating profile certs ...
	I0910 18:59:26.193582   71529 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/client.key
	I0910 18:59:26.193664   71529 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/apiserver.key.93ff3787
	I0910 18:59:26.193722   71529 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/proxy-client.key
	I0910 18:59:26.193871   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:26.193924   71529 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:26.193978   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:26.194026   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:26.194053   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:26.194083   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:26.194132   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:26.194868   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:26.231957   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:26.280213   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:26.310722   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:26.347855   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0910 18:59:26.386495   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:59:26.411742   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:26.435728   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:59:26.460305   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:26.484974   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:26.508782   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:26.531397   71529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:26.548219   71529 ssh_runner.go:195] Run: openssl version
	I0910 18:59:26.553969   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:26.564950   71529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:26.569539   71529 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:26.569594   71529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:26.575677   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:59:26.586342   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:26.606946   71529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:26.611671   71529 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:26.611720   71529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:26.617271   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:26.627833   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:26.638225   71529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:26.642722   71529 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:26.642759   71529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:26.648359   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:26.659003   71529 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:26.663236   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:26.668896   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:26.674346   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:26.680028   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:26.685462   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:26.691097   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 18:59:26.696620   71529 kubeadm.go:392] StartCluster: {Name:no-preload-347802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-347802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:26.696704   71529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:26.696746   71529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:26.733823   71529 cri.go:89] found id: ""
	I0910 18:59:26.733883   71529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:26.744565   71529 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:26.744584   71529 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:26.744620   71529 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:26.754754   71529 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:26.755687   71529 kubeconfig.go:125] found "no-preload-347802" server: "https://192.168.50.138:8443"
	I0910 18:59:26.757732   71529 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:26.767140   71529 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.138
	I0910 18:59:26.767167   71529 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:26.767180   71529 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:26.767235   71529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:26.805555   71529 cri.go:89] found id: ""
	I0910 18:59:26.805616   71529 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:26.822806   71529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:26.832434   71529 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:26.832456   71529 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:26.832499   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:59:26.841225   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:26.841288   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:26.850145   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:59:26.859016   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:26.859070   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:26.868806   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:59:26.877814   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:26.877867   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:26.886985   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:59:26.895859   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:26.895911   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:59:26.905600   71529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:59:26.915716   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:27.038963   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:30.202285   72122 start.go:364] duration metric: took 3m27.611616445s to acquireMachinesLock for "old-k8s-version-432422"
	I0910 18:59:30.202346   72122 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:30.202377   72122 fix.go:54] fixHost starting: 
	I0910 18:59:30.202807   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:30.202842   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:30.222440   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0910 18:59:30.222927   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:30.223415   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:59:30.223435   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:30.223748   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:30.223905   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:30.224034   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetState
	I0910 18:59:30.225464   72122 fix.go:112] recreateIfNeeded on old-k8s-version-432422: state=Stopped err=<nil>
	I0910 18:59:30.225505   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	W0910 18:59:30.225655   72122 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:30.227698   72122 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-432422" ...
	I0910 18:59:28.790020   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.790390   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Found IP for machine: 192.168.72.54
	I0910 18:59:28.790424   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has current primary IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.790435   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Reserving static IP address...
	I0910 18:59:28.790758   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Reserved static IP address: 192.168.72.54
	I0910 18:59:28.790780   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for SSH to be available...
	I0910 18:59:28.790811   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-557504", mac: "52:54:00:19:b8:3d", ip: "192.168.72.54"} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.790839   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | skip adding static IP to network mk-default-k8s-diff-port-557504 - found existing host DHCP lease matching {name: "default-k8s-diff-port-557504", mac: "52:54:00:19:b8:3d", ip: "192.168.72.54"}
	I0910 18:59:28.790856   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Getting to WaitForSSH function...
	I0910 18:59:28.792644   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.792947   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.792978   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.793114   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Using SSH client type: external
	I0910 18:59:28.793135   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa (-rw-------)
	I0910 18:59:28.793192   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:28.793242   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | About to run SSH command:
	I0910 18:59:28.793272   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | exit 0
	I0910 18:59:28.921644   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | SSH cmd err, output: <nil>: 
	I0910 18:59:28.921983   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetConfigRaw
	I0910 18:59:28.922663   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:28.925273   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.925614   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.925639   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.925884   71627 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/config.json ...
	I0910 18:59:28.926061   71627 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:28.926077   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:28.926272   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:28.928411   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.928731   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.928758   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.928909   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:28.929096   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:28.929249   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:28.929371   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:28.929552   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:28.929722   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:28.929732   71627 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:29.041454   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:29.041486   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetMachineName
	I0910 18:59:29.041745   71627 buildroot.go:166] provisioning hostname "default-k8s-diff-port-557504"
	I0910 18:59:29.041766   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetMachineName
	I0910 18:59:29.041965   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.044784   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.045141   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.045182   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.045358   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.045528   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.045705   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.045810   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.045968   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:29.046158   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:29.046173   71627 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-557504 && echo "default-k8s-diff-port-557504" | sudo tee /etc/hostname
	I0910 18:59:29.180227   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-557504
	
	I0910 18:59:29.180257   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.182815   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.183166   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.183200   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.183416   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.183612   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.183779   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.183883   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.184053   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:29.184258   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:29.184276   71627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-557504' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-557504/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-557504' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:59:29.315908   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:59:29.315942   71627 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:29.315981   71627 buildroot.go:174] setting up certificates
	I0910 18:59:29.315996   71627 provision.go:84] configureAuth start
	I0910 18:59:29.316013   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetMachineName
	I0910 18:59:29.316262   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:29.319207   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.319580   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.319609   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.319780   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.321973   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.322318   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.322352   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.322499   71627 provision.go:143] copyHostCerts
	I0910 18:59:29.322564   71627 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:29.322577   71627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:29.322647   71627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:29.322772   71627 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:29.322786   71627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:29.322832   71627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:29.322938   71627 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:29.322951   71627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:29.322986   71627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:29.323065   71627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-557504 san=[127.0.0.1 192.168.72.54 default-k8s-diff-port-557504 localhost minikube]
	I0910 18:59:29.488131   71627 provision.go:177] copyRemoteCerts
	I0910 18:59:29.488187   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:29.488210   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.491095   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.491441   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.491467   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.491666   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.491830   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.491973   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.492123   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:29.584016   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:29.614749   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0910 18:59:29.646904   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:29.677788   71627 provision.go:87] duration metric: took 361.777725ms to configureAuth
	I0910 18:59:29.677820   71627 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:29.678048   71627 config.go:182] Loaded profile config "default-k8s-diff-port-557504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:59:29.678135   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.680932   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.681372   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.681394   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.681674   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.681868   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.682011   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.682175   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.682431   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:29.682638   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:29.682665   71627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:29.934027   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:29.934058   71627 machine.go:96] duration metric: took 1.007985288s to provisionDockerMachine
	I0910 18:59:29.934071   71627 start.go:293] postStartSetup for "default-k8s-diff-port-557504" (driver="kvm2")
	I0910 18:59:29.934084   71627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:29.934104   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:29.934415   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:29.934447   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.937552   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.937917   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.937948   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.938110   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.938315   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.938496   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.938645   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:30.030842   71627 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:30.036158   71627 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:30.036180   71627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:30.036267   71627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:30.036380   71627 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:30.036520   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:30.048860   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:30.075362   71627 start.go:296] duration metric: took 141.276186ms for postStartSetup
	I0910 18:59:30.075398   71627 fix.go:56] duration metric: took 20.817735357s for fixHost
	I0910 18:59:30.075421   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:30.078501   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.078996   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.079026   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.079195   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:30.079373   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.079561   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.079704   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:30.079908   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:30.080089   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:30.080102   71627 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:30.202112   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994770.178719125
	
	I0910 18:59:30.202139   71627 fix.go:216] guest clock: 1725994770.178719125
	I0910 18:59:30.202149   71627 fix.go:229] Guest: 2024-09-10 18:59:30.178719125 +0000 UTC Remote: 2024-09-10 18:59:30.075402937 +0000 UTC m=+288.723404352 (delta=103.316188ms)
	I0910 18:59:30.202175   71627 fix.go:200] guest clock delta is within tolerance: 103.316188ms
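The fix.go lines above run `date +%s.%N` on the guest and compare the result against the host clock, accepting the machine when the skew is small. A minimal Go sketch of that check, not minikube's actual fix.go; the 2s tolerance here is an assumption:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the guest's `date +%s.%N` output,
// e.g. "1725994770.178719125", into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		if len(frac) > 9 {
			frac = frac[:9] // keep nanosecond precision only
		} else {
			frac += strings.Repeat("0", 9-len(frac)) // right-pad to nanoseconds
		}
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1725994770.178719125") // value captured in the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // hypothetical tolerance, not minikube's real constant
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}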
	I0910 18:59:30.202184   71627 start.go:83] releasing machines lock for "default-k8s-diff-port-557504", held for 20.944552577s
	I0910 18:59:30.202221   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.202522   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:30.205728   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.206068   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.206101   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.206267   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.206830   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.207011   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.207100   71627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:30.207171   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:30.207378   71627 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:30.207399   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:30.209851   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210130   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210182   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.210220   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210400   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:30.210553   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.210555   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.210625   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210735   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:30.210785   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:30.210849   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.210949   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:30.211002   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:30.211132   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:30.317738   71627 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:30.325333   71627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:30.485483   71627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:30.492979   71627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:30.493064   71627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:30.518974   71627 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:30.518998   71627 start.go:495] detecting cgroup driver to use...
	I0910 18:59:30.519192   71627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:30.539578   71627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:30.554986   71627 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:30.555045   71627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:30.570454   71627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:30.590125   71627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:30.738819   71627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:30.930750   71627 docker.go:233] disabling docker service ...
	I0910 18:59:30.930811   71627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:30.946226   71627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:30.961633   71627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:31.086069   71627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:31.208629   71627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:31.225988   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:31.248059   71627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:59:31.248127   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.260212   71627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:31.260296   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.271128   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.282002   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.296901   71627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:31.309739   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.325469   71627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.350404   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
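Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines. This is a reconstruction from the commands shown, not a capture from the VM, and omits whatever else the drop-in already contains:

pause_image = "registry.k8s.io/pause:3.10"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]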
	I0910 18:59:31.366130   71627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:31.379206   71627 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:31.379259   71627 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:31.395015   71627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:31.406339   71627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:31.538783   71627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:59:31.656815   71627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:31.656886   71627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:31.665263   71627 start.go:563] Will wait 60s for crictl version
	I0910 18:59:31.665333   71627 ssh_runner.go:195] Run: which crictl
	I0910 18:59:31.670317   71627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:31.719549   71627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:31.719641   71627 ssh_runner.go:195] Run: crio --version
	I0910 18:59:31.753801   71627 ssh_runner.go:195] Run: crio --version
	I0910 18:59:31.787092   71627 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 18:59:28.257536   71529 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.218537615s)
	I0910 18:59:28.257562   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:28.451173   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:28.516432   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:28.605746   71529 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:59:28.605823   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:29.106870   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:29.606340   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:29.623814   71529 api_server.go:72] duration metric: took 1.018071553s to wait for apiserver process to appear ...
	I0910 18:59:29.623842   71529 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:59:29.623864   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:29.624282   71529 api_server.go:269] stopped: https://192.168.50.138:8443/healthz: Get "https://192.168.50.138:8443/healthz": dial tcp 192.168.50.138:8443: connect: connection refused
	I0910 18:59:30.124145   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:30.228896   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .Start
	I0910 18:59:30.229066   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring networks are active...
	I0910 18:59:30.229735   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring network default is active
	I0910 18:59:30.230126   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring network mk-old-k8s-version-432422 is active
	I0910 18:59:30.230559   72122 main.go:141] libmachine: (old-k8s-version-432422) Getting domain xml...
	I0910 18:59:30.231206   72122 main.go:141] libmachine: (old-k8s-version-432422) Creating domain...
	I0910 18:59:31.669616   72122 main.go:141] libmachine: (old-k8s-version-432422) Waiting to get IP...
	I0910 18:59:31.670682   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:31.671124   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:31.671225   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:31.671101   72995 retry.go:31] will retry after 285.109621ms: waiting for machine to come up
	I0910 18:59:31.957711   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:31.958140   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:31.958169   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:31.958103   72995 retry.go:31] will retry after 306.703176ms: waiting for machine to come up
	I0910 18:59:32.266797   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:32.267299   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:32.267333   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:32.267226   72995 retry.go:31] will retry after 327.953362ms: waiting for machine to come up
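The "will retry after ..." lines above come from a backoff loop waiting for the recreated VM to report an IP. An illustrative Go sketch of such a loop; the initial delay, growth factor, and jitter here are assumptions, not minikube's retry.go parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry keeps calling fn with a growing, jittered delay until it
// succeeds or the deadline passes.
func retry(fn func() error, deadline time.Duration) error {
	start := time.Now()
	wait := 250 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out: %w", err)
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		wait = wait * 3 / 2 // grow roughly 1.5x per attempt
	}
}

func main() {
	attempts := 0
	err := retry(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("done:", err)
}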
	I0910 18:59:32.494151   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:59:32.494177   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:59:32.494193   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:32.550283   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:59:32.550317   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:59:32.624486   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:32.646548   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:32.646583   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:33.124697   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:33.139775   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:33.139814   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:33.623998   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:33.632392   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:33.632430   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:34.123979   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:34.133552   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0910 18:59:34.143511   71529 api_server.go:141] control plane version: v1.31.0
	I0910 18:59:34.143543   71529 api_server.go:131] duration metric: took 4.519693435s to wait for apiserver health ...
	I0910 18:59:34.143552   71529 cni.go:84] Creating CNI manager for ""
	I0910 18:59:34.143558   71529 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:34.145562   71529 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
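The api_server.go lines above poll https://192.168.50.138:8443/healthz, tolerating the early 403/500 responses until the endpoint returns 200 "ok". A minimal Go sketch of that polling loop; it skips TLS verification purely for brevity, whereas minikube authenticates against the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it
// returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	// address taken from the log above; adjust for your cluster
	_ = waitForHealthz("https://192.168.50.138:8443/healthz", 4*time.Minute)
}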
	I0910 18:59:31.788472   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:31.791698   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:31.792063   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:31.792102   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:31.792342   71627 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:31.798045   71627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:31.814552   71627 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-557504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-557504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:31.814718   71627 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:59:31.814775   71627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:31.863576   71627 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 18:59:31.863655   71627 ssh_runner.go:195] Run: which lz4
	I0910 18:59:31.868776   71627 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 18:59:31.874162   71627 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 18:59:31.874194   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 18:59:33.358271   71627 crio.go:462] duration metric: took 1.489531006s to copy over tarball
	I0910 18:59:33.358356   71627 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 18:59:35.759805   71627 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.401424942s)
	I0910 18:59:35.759833   71627 crio.go:469] duration metric: took 2.401529016s to extract the tarball
	I0910 18:59:35.759842   71627 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 18:59:35.797349   71627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:35.849544   71627 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:59:35.849571   71627 cache_images.go:84] Images are preloaded, skipping loading
	I0910 18:59:35.849583   71627 kubeadm.go:934] updating node { 192.168.72.54 8444 v1.31.0 crio true true} ...
	I0910 18:59:35.849706   71627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-557504 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-557504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:59:35.849783   71627 ssh_runner.go:195] Run: crio config
	I0910 18:59:35.896486   71627 cni.go:84] Creating CNI manager for ""
	I0910 18:59:35.896514   71627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:35.896534   71627 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:35.896556   71627 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-557504 NodeName:default-k8s-diff-port-557504 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:59:35.896707   71627 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-557504"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:59:35.896777   71627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:59:35.907249   71627 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:35.907337   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:35.917196   71627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0910 18:59:35.935072   71627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:35.953823   71627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0910 18:59:35.970728   71627 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:35.974648   71627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:35.986487   71627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:36.144443   71627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:36.164942   71627 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504 for IP: 192.168.72.54
	I0910 18:59:36.164972   71627 certs.go:194] generating shared ca certs ...
	I0910 18:59:36.164990   71627 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:36.165172   71627 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:36.165242   71627 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:36.165255   71627 certs.go:256] generating profile certs ...
	I0910 18:59:36.165382   71627 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/client.key
	I0910 18:59:36.165460   71627 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/apiserver.key.5cc31a18
	I0910 18:59:36.165505   71627 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/proxy-client.key
	I0910 18:59:36.165640   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:36.165680   71627 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:36.165700   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:36.165733   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:36.165770   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:36.165803   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:36.165874   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:36.166687   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:36.203302   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:36.230599   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:36.269735   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:36.311674   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0910 18:59:36.354614   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 18:59:36.379082   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:34.146903   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 18:59:34.163037   71529 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 18:59:34.189830   71529 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:59:34.200702   71529 system_pods.go:59] 8 kube-system pods found
	I0910 18:59:34.200751   71529 system_pods.go:61] "coredns-6f6b679f8f-54rpl" [2e301d43-a54a-4836-abf8-a45f5bc15889] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 18:59:34.200762   71529 system_pods.go:61] "etcd-no-preload-347802" [0fdffb97-72c6-4588-9593-46bcbed0a9fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 18:59:34.200773   71529 system_pods.go:61] "kube-apiserver-no-preload-347802" [3cf5abac-1d94-4ee2-a962-9daad308ec8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 18:59:34.200782   71529 system_pods.go:61] "kube-controller-manager-no-preload-347802" [6769757d-57fd-46c8-8f78-d20f80e592d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 18:59:34.200788   71529 system_pods.go:61] "kube-proxy-7v9n8" [d01842ad-3dae-49e1-8570-db9bcf4d0afc] Running
	I0910 18:59:34.200797   71529 system_pods.go:61] "kube-scheduler-no-preload-347802" [20e59c6b-4387-4dd0-b242-78d107775275] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 18:59:34.200804   71529 system_pods.go:61] "metrics-server-6867b74b74-w8rqv" [52535081-4503-4136-963d-6b2db6c0224e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 18:59:34.200809   71529 system_pods.go:61] "storage-provisioner" [9f7c0178-7194-4c73-95a4-5a3c0091f3ac] Running
	I0910 18:59:34.200816   71529 system_pods.go:74] duration metric: took 10.965409ms to wait for pod list to return data ...
	I0910 18:59:34.200857   71529 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:59:34.204544   71529 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:59:34.204568   71529 node_conditions.go:123] node cpu capacity is 2
	I0910 18:59:34.204580   71529 node_conditions.go:105] duration metric: took 3.714534ms to run NodePressure ...
	I0910 18:59:34.204597   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:34.487106   71529 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 18:59:34.491817   71529 kubeadm.go:739] kubelet initialised
	I0910 18:59:34.491838   71529 kubeadm.go:740] duration metric: took 4.708046ms waiting for restarted kubelet to initialise ...
	I0910 18:59:34.491845   71529 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:34.496604   71529 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:34.501535   71529 pod_ready.go:98] node "no-preload-347802" hosting pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.501553   71529 pod_ready.go:82] duration metric: took 4.927724ms for pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:34.501561   71529 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-347802" hosting pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.501567   71529 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:34.505473   71529 pod_ready.go:98] node "no-preload-347802" hosting pod "etcd-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.505491   71529 pod_ready.go:82] duration metric: took 3.917111ms for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:34.505499   71529 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-347802" hosting pod "etcd-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.505507   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:34.510025   71529 pod_ready.go:98] node "no-preload-347802" hosting pod "kube-apiserver-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.510043   71529 pod_ready.go:82] duration metric: took 4.522609ms for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:34.510050   71529 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-347802" hosting pod "kube-apiserver-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.510056   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:36.519023   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
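The pod_ready.go waits above repeatedly check whether each system-critical pod has reached the "Ready" condition. An illustrative client-go loop in the same spirit; the pod and namespace come from the log, while the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-347802", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}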
	I0910 18:59:32.597017   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:32.597589   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:32.597616   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:32.597554   72995 retry.go:31] will retry after 448.654363ms: waiting for machine to come up
	I0910 18:59:33.048100   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:33.048559   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:33.048590   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:33.048478   72995 retry.go:31] will retry after 654.829574ms: waiting for machine to come up
	I0910 18:59:33.704902   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:33.705446   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:33.705475   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:33.705363   72995 retry.go:31] will retry after 610.514078ms: waiting for machine to come up
	I0910 18:59:34.316978   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:34.317481   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:34.317503   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:34.317430   72995 retry.go:31] will retry after 1.125805817s: waiting for machine to come up
	I0910 18:59:35.444880   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:35.445369   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:35.445394   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:35.445312   72995 retry.go:31] will retry after 1.484426931s: waiting for machine to come up
	I0910 18:59:36.931028   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:36.931568   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:36.931613   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:36.931524   72995 retry.go:31] will retry after 1.819998768s: waiting for machine to come up
	I0910 18:59:36.403353   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 18:59:36.427345   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:36.452765   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:36.485795   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:36.512944   71627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:36.532454   71627 ssh_runner.go:195] Run: openssl version
	I0910 18:59:36.538449   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:36.550806   71627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:36.555761   71627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:36.555819   71627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:36.562430   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:36.573730   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:36.584987   71627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:36.589551   71627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:36.589615   71627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:36.595496   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:36.607821   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:36.620298   71627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:36.624888   71627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:36.624939   71627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:36.630534   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:59:36.641657   71627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:36.646317   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:36.652748   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:36.661166   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:36.670240   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:36.676776   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:36.686442   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
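The sequence above is the certificate plumbing for the node: each CA bundle is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed, a /etc/ssl/certs/<hash>.0 symlink is created so TLS clients can resolve it, and the cluster's serving and client certificates are then checked for expiry within the next 24 hours (-checkend 86400). A minimal Go sketch of those two checks run locally; the file paths are taken from the log, and the helper names are illustrative rather than minikube's own:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA computes the OpenSSL subject hash of a CA certificate and creates the
// /etc/ssl/certs/<hash>.0 symlink that TLS libraries use to locate it.
func linkCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace a stale link if present
	return os.Symlink(certPath, link)
}

// expiresSoon mirrors "openssl x509 -noout -checkend 86400": a non-zero exit
// means the certificate will expire within 24h (or the check itself failed).
func expiresSoon(certPath string) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
	return err != nil
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println("apiserver-kubelet-client cert expiring within 24h:",
		expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}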
	I0910 18:59:36.693233   71627 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-557504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-557504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:36.693351   71627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:36.693414   71627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:36.743159   71627 cri.go:89] found id: ""
	I0910 18:59:36.743256   71627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:36.754428   71627 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:36.754451   71627 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:36.754505   71627 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:36.765126   71627 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:36.766213   71627 kubeconfig.go:125] found "default-k8s-diff-port-557504" server: "https://192.168.72.54:8444"
	I0910 18:59:36.768428   71627 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:36.778678   71627 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I0910 18:59:36.778715   71627 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:36.778728   71627 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:36.778779   71627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:36.824031   71627 cri.go:89] found id: ""
	I0910 18:59:36.824107   71627 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:36.840585   71627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:36.851445   71627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:36.851462   71627 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:36.851508   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0910 18:59:36.860630   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:36.860682   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:36.869973   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0910 18:59:36.880034   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:36.880099   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:36.889684   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0910 18:59:36.898786   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:36.898870   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:36.908328   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0910 18:59:36.917272   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:36.917334   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:59:36.928923   71627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:59:36.940238   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:37.079143   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:37.945317   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:38.157807   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:38.245283   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:38.353653   71627 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:59:38.353746   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:38.854791   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:39.354743   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:39.409511   71627 api_server.go:72] duration metric: took 1.055855393s to wait for apiserver process to appear ...
	I0910 18:59:39.409543   71627 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:59:39.409566   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:39.410104   71627 api_server.go:269] stopped: https://192.168.72.54:8444/healthz: Get "https://192.168.72.54:8444/healthz": dial tcp 192.168.72.54:8444: connect: connection refused
	I0910 18:59:39.909665   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:39.018802   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:41.517911   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:38.753463   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:38.754076   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:38.754107   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:38.754019   72995 retry.go:31] will retry after 2.258214375s: waiting for machine to come up
	I0910 18:59:41.013524   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:41.013988   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:41.014011   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:41.013910   72995 retry.go:31] will retry after 2.030553777s: waiting for machine to come up
	I0910 18:59:41.976133   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:59:41.976166   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:59:41.976179   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:42.080631   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:42.080674   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:42.409865   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:42.421093   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:42.421174   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:42.910272   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:42.914729   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:42.914757   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:43.410280   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:43.414731   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I0910 18:59:43.421135   71627 api_server.go:141] control plane version: v1.31.0
	I0910 18:59:43.421163   71627 api_server.go:131] duration metric: took 4.011612782s to wait for apiserver health ...
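The healthz wait above is a plain polling loop: request https://192.168.72.54:8444/healthz roughly every 500ms, treat connection refused, 403 and 500 responses as "not ready yet", and stop as soon as the endpoint answers 200 "ok". A rough standalone Go sketch of that loop follows; the real client authenticates with the cluster's certificates, so the InsecureSkipVerify below is only to keep the example self-contained:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
// or the timeout elapses, mirroring the retry loop in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// assumption: skip cert verification for the sketch; the real code uses proper client certs
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // endpoint answered "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.54:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}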
	I0910 18:59:43.421172   71627 cni.go:84] Creating CNI manager for ""
	I0910 18:59:43.421178   71627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:43.423063   71627 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 18:59:43.424278   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 18:59:43.434823   71627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
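Configuring the bridge CNI amounts to creating /etc/cni/net.d and dropping a conflist into it, as the two commands above show. A Go sketch of that step; the embedded conflist is a generic bridge example, not the exact 496-byte file minikube writes:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// A generic bridge CNI conflist; minikube generates its own 1-k8s.conflist,
// so treat this payload as illustrative only.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil { // mirrors "sudo mkdir -p /etc/cni/net.d"
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// mirrors the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}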
	I0910 18:59:43.461604   71627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:59:43.477566   71627 system_pods.go:59] 8 kube-system pods found
	I0910 18:59:43.477592   71627 system_pods.go:61] "coredns-6f6b679f8f-nq9fl" [87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 18:59:43.477600   71627 system_pods.go:61] "etcd-default-k8s-diff-port-557504" [484ce7e2-e5b6-4b96-928e-c4519e072425] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 18:59:43.477606   71627 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-557504" [55a71e48-2cca-484c-8186-176494f8158c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 18:59:43.477616   71627 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-557504" [98e7866a-c4a1-4b90-a758-506502405feb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 18:59:43.477623   71627 system_pods.go:61] "kube-proxy-4t8r9" [aca739fc-0169-433b-85f1-17bf3ab538cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0910 18:59:43.477631   71627 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-557504" [eaa24622-2f9c-47bd-b002-173d5fb06126] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 18:59:43.477638   71627 system_pods.go:61] "metrics-server-6867b74b74-4sfwg" [6b5d0161-6a62-4752-b714-ada6b3772956] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 18:59:43.477648   71627 system_pods.go:61] "storage-provisioner" [7536d42b-90f4-44de-a7ba-652f8e535304] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 18:59:43.477658   71627 system_pods.go:74] duration metric: took 16.035701ms to wait for pod list to return data ...
	I0910 18:59:43.477673   71627 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:59:43.485818   71627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:59:43.485840   71627 node_conditions.go:123] node cpu capacity is 2
	I0910 18:59:43.485850   71627 node_conditions.go:105] duration metric: took 8.173642ms to run NodePressure ...
	I0910 18:59:43.485864   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:43.752422   71627 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 18:59:43.756713   71627 kubeadm.go:739] kubelet initialised
	I0910 18:59:43.756735   71627 kubeadm.go:740] duration metric: took 4.285787ms waiting for restarted kubelet to initialise ...
	I0910 18:59:43.756744   71627 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:43.762384   71627 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.767080   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.767099   71627 pod_ready.go:82] duration metric: took 4.695864ms for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.767109   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.767116   71627 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.772560   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.772579   71627 pod_ready.go:82] duration metric: took 5.453737ms for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.772588   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.772593   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.776328   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.776345   71627 pod_ready.go:82] duration metric: took 3.745149ms for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.776352   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.776357   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.865825   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.865850   71627 pod_ready.go:82] duration metric: took 89.48636ms for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.865862   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.865868   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:44.264892   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-proxy-4t8r9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.264922   71627 pod_ready.go:82] duration metric: took 399.047611ms for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:44.264932   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-proxy-4t8r9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.264938   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:44.665376   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.665402   71627 pod_ready.go:82] duration metric: took 400.457184ms for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:44.665413   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.665418   71627 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:45.065696   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:45.065724   71627 pod_ready.go:82] duration metric: took 400.298527ms for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:45.065736   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:45.065743   71627 pod_ready.go:39] duration metric: took 1.308988307s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
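Each of the pod_ready checks above boils down to fetching the pod and testing its Ready condition, with the wait skipped while the node itself still reports Ready "False". A condensed client-go sketch of that readiness test, using the kubeconfig path and a pod name from this run; the skip-on-NotReady bookkeeping is left out:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19598-5973/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6f6b679f8f-nq9fl", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}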
	I0910 18:59:45.065759   71627 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 18:59:45.077813   71627 ops.go:34] apiserver oom_adj: -16
	I0910 18:59:45.077838   71627 kubeadm.go:597] duration metric: took 8.323378955s to restartPrimaryControlPlane
	I0910 18:59:45.077846   71627 kubeadm.go:394] duration metric: took 8.384626167s to StartCluster
	I0910 18:59:45.077860   71627 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:45.077980   71627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:59:45.079979   71627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:45.080304   71627 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 18:59:45.080399   71627 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 18:59:45.080478   71627 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-557504"
	I0910 18:59:45.080510   71627 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-557504"
	I0910 18:59:45.080506   71627 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-557504"
	W0910 18:59:45.080523   71627 addons.go:243] addon storage-provisioner should already be in state true
	I0910 18:59:45.080519   71627 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-557504"
	I0910 18:59:45.080553   71627 host.go:66] Checking if "default-k8s-diff-port-557504" exists ...
	I0910 18:59:45.080568   71627 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-557504"
	I0910 18:59:45.080568   71627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-557504"
	W0910 18:59:45.080582   71627 addons.go:243] addon metrics-server should already be in state true
	I0910 18:59:45.080529   71627 config.go:182] Loaded profile config "default-k8s-diff-port-557504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:59:45.080608   71627 host.go:66] Checking if "default-k8s-diff-port-557504" exists ...
	I0910 18:59:45.080906   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.080932   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.080989   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.080994   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.081015   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.081015   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.081905   71627 out.go:177] * Verifying Kubernetes components...
	I0910 18:59:45.083206   71627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:45.096019   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43935
	I0910 18:59:45.096288   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38379
	I0910 18:59:45.096453   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.096730   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.096984   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.097012   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.097243   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.097273   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.097401   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.097596   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.097678   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0910 18:59:45.097693   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.098049   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.098464   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.098504   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.099185   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.099207   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.099592   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.100125   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.100166   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.101159   71627 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-557504"
	W0910 18:59:45.101175   71627 addons.go:243] addon default-storageclass should already be in state true
	I0910 18:59:45.101203   71627 host.go:66] Checking if "default-k8s-diff-port-557504" exists ...
	I0910 18:59:45.101501   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.101537   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.114823   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41241
	I0910 18:59:45.115253   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.115363   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38597
	I0910 18:59:45.115737   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.115759   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.115795   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.116106   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.116244   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.116270   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.116289   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.116696   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.117290   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.117327   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.117546   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36199
	I0910 18:59:45.117879   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.118496   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:45.118631   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.118643   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.118949   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.119107   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.120353   71627 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 18:59:45.120775   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:45.121685   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 18:59:45.121699   71627 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 18:59:45.121718   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:45.122500   71627 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:45.123762   71627 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:59:45.123778   71627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 18:59:45.123792   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:45.125345   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.125926   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:45.126161   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:45.126357   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:45.125943   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.126548   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:45.126661   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:45.127075   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.127507   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:45.127522   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.127675   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:45.127810   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:45.127905   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:45.127997   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:45.132978   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39839
	I0910 18:59:45.133303   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.133757   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.133779   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.134043   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.134188   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.135712   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:45.135917   71627 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 18:59:45.135928   71627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 18:59:45.135938   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:45.138375   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.138616   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:45.138629   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.138768   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:45.138937   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:45.139054   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:45.139181   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:45.293036   71627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:45.311747   71627 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-557504" to be "Ready" ...
	I0910 18:59:45.425820   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 18:59:45.425852   71627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 18:59:45.430783   71627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:59:45.441452   71627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 18:59:45.481245   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 18:59:45.481268   71627 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 18:59:45.573348   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 18:59:45.573373   71627 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 18:59:45.634830   71627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 18:59:46.589194   71627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.147713188s)
	I0910 18:59:46.589253   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589266   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589284   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589311   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589321   71627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.158508631s)
	I0910 18:59:46.589343   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589355   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589700   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.589700   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.589723   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589729   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589730   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.589736   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.589738   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589741   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.589751   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.589752   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589761   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589774   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589816   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589755   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589852   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589961   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589971   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.590192   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.590207   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.590220   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.591675   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.591692   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.591702   71627 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-557504"
	I0910 18:59:46.595906   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.595921   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.596105   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.596126   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.598033   71627 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0910 18:59:44.023282   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:46.516768   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:47.016400   71529 pod_ready.go:93] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:47.016423   71529 pod_ready.go:82] duration metric: took 12.506359172s for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:47.016435   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7v9n8" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:47.020809   71529 pod_ready.go:93] pod "kube-proxy-7v9n8" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:47.020827   71529 pod_ready.go:82] duration metric: took 4.386051ms for pod "kube-proxy-7v9n8" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:47.020836   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.046937   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:43.047363   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:43.047393   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:43.047314   72995 retry.go:31] will retry after 2.233047134s: waiting for machine to come up
	I0910 18:59:45.282610   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:45.283104   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:45.283133   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:45.283026   72995 retry.go:31] will retry after 4.238676711s: waiting for machine to come up
	I0910 18:59:51.182133   71183 start.go:364] duration metric: took 56.422548201s to acquireMachinesLock for "embed-certs-836868"
	I0910 18:59:51.182195   71183 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:51.182206   71183 fix.go:54] fixHost starting: 
	I0910 18:59:51.182600   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:51.182637   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:51.198943   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33067
	I0910 18:59:51.199345   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:51.199803   71183 main.go:141] libmachine: Using API Version  1
	I0910 18:59:51.199828   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:51.200153   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:51.200364   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 18:59:51.200493   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 18:59:51.202100   71183 fix.go:112] recreateIfNeeded on embed-certs-836868: state=Stopped err=<nil>
	I0910 18:59:51.202123   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	W0910 18:59:51.202286   71183 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:51.204028   71183 out.go:177] * Restarting existing kvm2 VM for "embed-certs-836868" ...
	I0910 18:59:46.599125   71627 addons.go:510] duration metric: took 1.518742666s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0910 18:59:47.316003   71627 node_ready.go:53] node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:49.316691   71627 node_ready.go:53] node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:49.027374   71529 pod_ready.go:93] pod "kube-scheduler-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:49.027393   71529 pod_ready.go:82] duration metric: took 2.006551523s for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:49.027403   71529 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:51.034568   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:51.205180   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Start
	I0910 18:59:51.205332   71183 main.go:141] libmachine: (embed-certs-836868) Ensuring networks are active...
	I0910 18:59:51.205952   71183 main.go:141] libmachine: (embed-certs-836868) Ensuring network default is active
	I0910 18:59:51.206322   71183 main.go:141] libmachine: (embed-certs-836868) Ensuring network mk-embed-certs-836868 is active
	I0910 18:59:51.206717   71183 main.go:141] libmachine: (embed-certs-836868) Getting domain xml...
	I0910 18:59:51.207430   71183 main.go:141] libmachine: (embed-certs-836868) Creating domain...
	I0910 18:59:49.526000   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.526536   72122 main.go:141] libmachine: (old-k8s-version-432422) Found IP for machine: 192.168.61.51
	I0910 18:59:49.526558   72122 main.go:141] libmachine: (old-k8s-version-432422) Reserving static IP address...
	I0910 18:59:49.526569   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has current primary IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.527018   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "old-k8s-version-432422", mac: "52:54:00:65:ab:4d", ip: "192.168.61.51"} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.527063   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | skip adding static IP to network mk-old-k8s-version-432422 - found existing host DHCP lease matching {name: "old-k8s-version-432422", mac: "52:54:00:65:ab:4d", ip: "192.168.61.51"}
	I0910 18:59:49.527084   72122 main.go:141] libmachine: (old-k8s-version-432422) Reserved static IP address: 192.168.61.51
	I0910 18:59:49.527099   72122 main.go:141] libmachine: (old-k8s-version-432422) Waiting for SSH to be available...
	I0910 18:59:49.527113   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Getting to WaitForSSH function...
	I0910 18:59:49.529544   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.529962   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.529987   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.530143   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Using SSH client type: external
	I0910 18:59:49.530170   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa (-rw-------)
	I0910 18:59:49.530195   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:49.530208   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | About to run SSH command:
	I0910 18:59:49.530245   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | exit 0
	I0910 18:59:49.656944   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | SSH cmd err, output: <nil>: 
	I0910 18:59:49.657307   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetConfigRaw
	I0910 18:59:49.657926   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:49.660332   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.660689   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.660711   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.660992   72122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/config.json ...
	I0910 18:59:49.661238   72122 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:49.661259   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:49.661480   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.663824   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.664208   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.664236   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.664370   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.664565   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.664712   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.664887   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.665103   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.665392   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.665406   72122 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:49.769433   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:49.769468   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:49.769716   72122 buildroot.go:166] provisioning hostname "old-k8s-version-432422"
	I0910 18:59:49.769740   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:49.769918   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.772324   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.772710   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.772736   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.772875   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.773061   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.773245   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.773384   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.773554   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.773751   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.773764   72122 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-432422 && echo "old-k8s-version-432422" | sudo tee /etc/hostname
	I0910 18:59:49.891230   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-432422
	
	I0910 18:59:49.891259   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.894272   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.894641   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.894683   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.894820   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.894983   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.895094   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.895210   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.895330   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.895540   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.895559   72122 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-432422' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-432422/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-432422' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:59:50.011767   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:59:50.011795   72122 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:50.011843   72122 buildroot.go:174] setting up certificates
	I0910 18:59:50.011854   72122 provision.go:84] configureAuth start
	I0910 18:59:50.011866   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:50.012185   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:50.014947   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.015352   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.015388   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.015549   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.017712   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.018002   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.018036   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.018193   72122 provision.go:143] copyHostCerts
	I0910 18:59:50.018251   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:50.018265   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:50.018337   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:50.018481   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:50.018491   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:50.018513   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:50.018585   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:50.018594   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:50.018612   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:50.018667   72122 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-432422 san=[127.0.0.1 192.168.61.51 localhost minikube old-k8s-version-432422]
	I0910 18:59:50.528798   72122 provision.go:177] copyRemoteCerts
	I0910 18:59:50.528864   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:50.528900   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.532154   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.532576   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.532613   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.532765   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.532995   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.533205   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.533370   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:50.620169   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0910 18:59:50.647163   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:50.679214   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:50.704333   72122 provision.go:87] duration metric: took 692.46607ms to configureAuth
	I0910 18:59:50.704360   72122 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:50.704545   72122 config.go:182] Loaded profile config "old-k8s-version-432422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0910 18:59:50.704639   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.707529   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.707903   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.707931   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.708082   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.708297   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.708463   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.708641   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.708786   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:50.708954   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:50.708969   72122 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:50.935375   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:50.935403   72122 machine.go:96] duration metric: took 1.274152353s to provisionDockerMachine
	I0910 18:59:50.935414   72122 start.go:293] postStartSetup for "old-k8s-version-432422" (driver="kvm2")
	I0910 18:59:50.935424   72122 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:50.935448   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:50.935763   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:50.935796   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.938507   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.938865   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.938902   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.939008   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.939198   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.939529   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.939689   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.024726   72122 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:51.029522   72122 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:51.029547   72122 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:51.029632   72122 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:51.029734   72122 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:51.029848   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:51.042454   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:51.068748   72122 start.go:296] duration metric: took 133.318275ms for postStartSetup
	I0910 18:59:51.068792   72122 fix.go:56] duration metric: took 20.866428313s for fixHost
	I0910 18:59:51.068816   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.071533   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.071894   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.071921   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.072072   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.072264   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.072468   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.072616   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.072784   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:51.072938   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:51.072948   72122 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:51.181996   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994791.151610055
	
	I0910 18:59:51.182016   72122 fix.go:216] guest clock: 1725994791.151610055
	I0910 18:59:51.182024   72122 fix.go:229] Guest: 2024-09-10 18:59:51.151610055 +0000 UTC Remote: 2024-09-10 18:59:51.068796263 +0000 UTC m=+228.614166738 (delta=82.813792ms)
	I0910 18:59:51.182048   72122 fix.go:200] guest clock delta is within tolerance: 82.813792ms
	I0910 18:59:51.182055   72122 start.go:83] releasing machines lock for "old-k8s-version-432422", held for 20.979733564s
	I0910 18:59:51.182094   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.182331   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:51.184857   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.185183   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.185212   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.185346   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.185840   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.186006   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.186079   72122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:51.186143   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.186215   72122 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:51.186238   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.189304   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189674   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.189698   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189765   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189879   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.190057   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.190212   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.190230   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.190255   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.190358   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.190470   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.190652   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.190817   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.190948   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.296968   72122 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:51.303144   72122 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:51.447027   72122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:51.454963   72122 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:51.455032   72122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:51.474857   72122 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:51.474882   72122 start.go:495] detecting cgroup driver to use...
	I0910 18:59:51.474957   72122 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:51.490457   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:51.504502   72122 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:51.504569   72122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:51.523331   72122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:51.543438   72122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:51.678734   72122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:51.831736   72122 docker.go:233] disabling docker service ...
	I0910 18:59:51.831804   72122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:51.846805   72122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:51.865771   72122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:52.012922   72122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:52.161595   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:52.180034   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:52.200984   72122 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0910 18:59:52.201041   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.211927   72122 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:52.211989   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.223601   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.234211   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.246209   72122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:52.264079   72122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:52.277144   72122 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:52.277204   72122 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:52.292683   72122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:52.304601   72122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:52.421971   72122 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:59:52.544386   72122 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:52.544459   72122 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:52.551436   72122 start.go:563] Will wait 60s for crictl version
	I0910 18:59:52.551487   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:52.555614   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:52.598031   72122 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:52.598128   72122 ssh_runner.go:195] Run: crio --version
	I0910 18:59:52.629578   72122 ssh_runner.go:195] Run: crio --version
	I0910 18:59:52.662403   72122 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0910 18:59:51.815436   71627 node_ready.go:53] node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:52.816775   71627 node_ready.go:49] node "default-k8s-diff-port-557504" has status "Ready":"True"
	I0910 18:59:52.816809   71627 node_ready.go:38] duration metric: took 7.505015999s for node "default-k8s-diff-port-557504" to be "Ready" ...
	I0910 18:59:52.816821   71627 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:52.823528   71627 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.829667   71627 pod_ready.go:93] pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:52.829688   71627 pod_ready.go:82] duration metric: took 6.135159ms for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.829696   71627 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.833912   71627 pod_ready.go:93] pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:52.833933   71627 pod_ready.go:82] duration metric: took 4.231672ms for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.833942   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.838863   71627 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:52.838883   71627 pod_ready.go:82] duration metric: took 4.934379ms for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.838897   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:53.851413   71627 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:53.851437   71627 pod_ready.go:82] duration metric: took 1.012531075s for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:53.851447   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:54.020886   71627 pod_ready.go:93] pod "kube-proxy-4t8r9" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:54.020910   71627 pod_ready.go:82] duration metric: took 169.456474ms for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:54.020926   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:55.217416   71627 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:55.217440   71627 pod_ready.go:82] duration metric: took 1.196506075s for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:55.217451   71627 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:53.036769   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:55.536544   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:52.544041   71183 main.go:141] libmachine: (embed-certs-836868) Waiting to get IP...
	I0910 18:59:52.545001   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:52.545522   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:52.545586   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:52.545494   73202 retry.go:31] will retry after 260.451431ms: waiting for machine to come up
	I0910 18:59:52.807914   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:52.808351   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:52.808377   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:52.808307   73202 retry.go:31] will retry after 340.526757ms: waiting for machine to come up
	I0910 18:59:53.150854   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:53.151446   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:53.151476   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:53.151404   73202 retry.go:31] will retry after 470.620322ms: waiting for machine to come up
	I0910 18:59:53.624169   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:53.624709   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:53.624747   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:53.624657   73202 retry.go:31] will retry after 529.186273ms: waiting for machine to come up
	I0910 18:59:54.155156   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:54.155644   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:54.155673   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:54.155599   73202 retry.go:31] will retry after 575.877001ms: waiting for machine to come up
	I0910 18:59:54.733522   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:54.734049   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:54.734092   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:54.734000   73202 retry.go:31] will retry after 577.385946ms: waiting for machine to come up
	I0910 18:59:55.312705   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:55.313087   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:55.313114   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:55.313059   73202 retry.go:31] will retry after 735.788809ms: waiting for machine to come up
	I0910 18:59:56.049771   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:56.050272   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:56.050306   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:56.050224   73202 retry.go:31] will retry after 1.433431053s: waiting for machine to come up
	I0910 18:59:52.663465   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:52.666401   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:52.666796   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:52.666843   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:52.667002   72122 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:52.672338   72122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:52.688427   72122 kubeadm.go:883] updating cluster {Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:52.688559   72122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 18:59:52.688623   72122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:52.740370   72122 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0910 18:59:52.740447   72122 ssh_runner.go:195] Run: which lz4
	I0910 18:59:52.744925   72122 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 18:59:52.749840   72122 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 18:59:52.749872   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0910 18:59:54.437031   72122 crio.go:462] duration metric: took 1.692132914s to copy over tarball
	I0910 18:59:54.437124   72122 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 18:59:57.462705   72122 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.025545297s)
	I0910 18:59:57.462743   72122 crio.go:469] duration metric: took 3.025690485s to extract the tarball
	I0910 18:59:57.462753   72122 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 18:59:57.223959   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:59.224657   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:01.224783   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:58.035610   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:00.535779   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:57.485417   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:57.485870   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:57.485896   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:57.485815   73202 retry.go:31] will retry after 1.638565814s: waiting for machine to come up
	I0910 18:59:59.126134   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:59.126625   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:59.126657   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:59.126576   73202 retry.go:31] will retry after 2.127929201s: waiting for machine to come up
	I0910 19:00:01.256121   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:01.256665   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 19:00:01.256694   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 19:00:01.256612   73202 retry.go:31] will retry after 2.530100505s: waiting for machine to come up
	I0910 18:59:57.508817   72122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:57.551327   72122 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0910 18:59:57.551350   72122 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 18:59:57.551434   72122 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:57.551704   72122 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.551776   72122 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.552000   72122 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.551807   72122 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.551846   72122 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.551714   72122 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0910 18:59:57.551917   72122 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.553642   72122 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.553660   72122 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.553917   72122 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:57.553935   72122 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0910 18:59:57.554014   72122 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.554160   72122 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.554376   72122 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.554662   72122 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.726191   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.742799   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.745264   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.753214   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.768122   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.770828   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0910 18:59:57.774835   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.807657   72122 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0910 18:59:57.807693   72122 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.807733   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.908662   72122 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0910 18:59:57.908678   72122 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0910 18:59:57.908707   72122 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.908711   72122 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.908759   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.908760   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.920214   72122 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0910 18:59:57.920248   72122 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0910 18:59:57.920258   72122 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.920280   72122 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.920304   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.920313   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.937914   72122 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0910 18:59:57.937952   72122 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.937958   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.937999   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.938033   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.938006   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.938073   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.938063   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.938157   72122 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0910 18:59:57.938185   72122 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0910 18:59:57.938215   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:58.044082   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:58.044139   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:58.044146   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:58.044173   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.045813   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:58.045816   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:58.045849   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.198804   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:58.198841   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:58.198881   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:58.198944   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.198978   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:58.199000   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:58.199081   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.353153   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0910 18:59:58.353217   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0910 18:59:58.353232   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0910 18:59:58.353277   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0910 18:59:58.359353   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.359363   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0910 18:59:58.359421   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.386872   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:58.407734   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0910 18:59:58.425479   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0910 18:59:58.553340   72122 cache_images.go:92] duration metric: took 1.001972084s to LoadCachedImages
	W0910 18:59:58.553438   72122 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0910 18:59:58.553455   72122 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.20.0 crio true true} ...
	I0910 18:59:58.553634   72122 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-432422 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:59:58.553722   72122 ssh_runner.go:195] Run: crio config
	I0910 18:59:58.605518   72122 cni.go:84] Creating CNI manager for ""
	I0910 18:59:58.605542   72122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:58.605554   72122 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:58.605577   72122 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-432422 NodeName:old-k8s-version-432422 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0910 18:59:58.605744   72122 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-432422"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:59:58.605814   72122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0910 18:59:58.618033   72122 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:58.618096   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:58.629175   72122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0910 18:59:58.653830   72122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:58.679797   72122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0910 18:59:58.698692   72122 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:58.702565   72122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:58.715128   72122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:58.858262   72122 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:58.876681   72122 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422 for IP: 192.168.61.51
	I0910 18:59:58.876719   72122 certs.go:194] generating shared ca certs ...
	I0910 18:59:58.876740   72122 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:58.876921   72122 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:58.876983   72122 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:58.876996   72122 certs.go:256] generating profile certs ...
	I0910 18:59:58.877129   72122 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/client.key
	I0910 18:59:58.877210   72122 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key.da6b542b
	I0910 18:59:58.877264   72122 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.key
	I0910 18:59:58.877424   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:58.877473   72122 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:58.877491   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:58.877528   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:58.877560   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:58.877591   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:58.877648   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:58.878410   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:58.936013   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:58.969736   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:59.017414   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:59.063599   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0910 18:59:59.093934   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:59:59.138026   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:59.166507   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 18:59:59.196972   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:59.223596   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:59.250627   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:59.279886   72122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:59.300491   72122 ssh_runner.go:195] Run: openssl version
	I0910 18:59:59.306521   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:59.317238   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.321625   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.321682   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.327532   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:59.339028   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:59.350578   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.355025   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.355106   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.360701   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:59.375040   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:59.389867   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.395829   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.395890   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.402425   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:59:59.414077   72122 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:59.418909   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:59.425061   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:59.431213   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:59.437581   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:59.443603   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:59.449820   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 18:59:59.456100   72122 kubeadm.go:392] StartCluster: {Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:59.456189   72122 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:59.456234   72122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:59.497167   72122 cri.go:89] found id: ""
	I0910 18:59:59.497227   72122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:59.508449   72122 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:59.508474   72122 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:59.508527   72122 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:59.521416   72122 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:59.522489   72122 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-432422" does not appear in /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:59:59.523125   72122 kubeconfig.go:62] /home/jenkins/minikube-integration/19598-5973/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-432422" cluster setting kubeconfig missing "old-k8s-version-432422" context setting]
	I0910 18:59:59.524107   72122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:59.637793   72122 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:59.651879   72122 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.51
	I0910 18:59:59.651916   72122 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:59.651930   72122 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:59.651989   72122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:59.691857   72122 cri.go:89] found id: ""
	I0910 18:59:59.691922   72122 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:59.708610   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:59.718680   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:59.718702   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:59.718755   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:59:59.729965   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:59.730028   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:59.740037   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:59:59.750640   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:59.750706   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:59.762436   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:59:59.773456   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:59.773522   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:59.783438   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:59:59.792996   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:59.793056   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:59:59.805000   72122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:59:59.815384   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:59.955068   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:00.842403   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.102530   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.212897   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.340128   72122 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:00:01.340217   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:01.841004   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:02.340913   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:03.225898   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:05.723882   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:03.034295   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:05.034431   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:03.790275   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:03.790710   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 19:00:03.790736   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 19:00:03.790662   73202 retry.go:31] will retry after 3.202952028s: waiting for machine to come up
	I0910 19:00:06.995302   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:06.996124   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 19:00:06.996149   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 19:00:06.996073   73202 retry.go:31] will retry after 3.076425277s: waiting for machine to come up
	I0910 19:00:02.840935   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:03.340938   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:03.840669   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:04.341213   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:04.841274   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:05.340698   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:05.841152   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:06.340425   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:06.841001   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:07.341198   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:07.724121   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:10.223744   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:07.533428   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:09.534830   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:12.033655   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:10.075125   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.075606   71183 main.go:141] libmachine: (embed-certs-836868) Found IP for machine: 192.168.39.107
	I0910 19:00:10.075634   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has current primary IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.075643   71183 main.go:141] libmachine: (embed-certs-836868) Reserving static IP address...
	I0910 19:00:10.076046   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "embed-certs-836868", mac: "52:54:00:24:ef:f2", ip: "192.168.39.107"} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.076075   71183 main.go:141] libmachine: (embed-certs-836868) DBG | skip adding static IP to network mk-embed-certs-836868 - found existing host DHCP lease matching {name: "embed-certs-836868", mac: "52:54:00:24:ef:f2", ip: "192.168.39.107"}
	I0910 19:00:10.076103   71183 main.go:141] libmachine: (embed-certs-836868) Reserved static IP address: 192.168.39.107
	I0910 19:00:10.076122   71183 main.go:141] libmachine: (embed-certs-836868) Waiting for SSH to be available...
	I0910 19:00:10.076133   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Getting to WaitForSSH function...
	I0910 19:00:10.078039   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.078327   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.078352   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.078452   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Using SSH client type: external
	I0910 19:00:10.078475   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa (-rw-------)
	I0910 19:00:10.078514   71183 main.go:141] libmachine: (embed-certs-836868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 19:00:10.078527   71183 main.go:141] libmachine: (embed-certs-836868) DBG | About to run SSH command:
	I0910 19:00:10.078548   71183 main.go:141] libmachine: (embed-certs-836868) DBG | exit 0
	I0910 19:00:10.201403   71183 main.go:141] libmachine: (embed-certs-836868) DBG | SSH cmd err, output: <nil>: 
	I0910 19:00:10.201748   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetConfigRaw
	I0910 19:00:10.202405   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:10.204760   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.205130   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.205160   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.205408   71183 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/config.json ...
	I0910 19:00:10.205697   71183 machine.go:93] provisionDockerMachine start ...
	I0910 19:00:10.205714   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:10.205924   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.208095   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.208394   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.208418   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.208534   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.208712   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.208856   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.208958   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.209193   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.209412   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.209427   71183 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 19:00:10.313247   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 19:00:10.313278   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 19:00:10.313556   71183 buildroot.go:166] provisioning hostname "embed-certs-836868"
	I0910 19:00:10.313584   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 19:00:10.313765   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.316135   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.316569   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.316592   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.316739   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.316893   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.317046   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.317165   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.317288   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.317490   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.317506   71183 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-836868 && echo "embed-certs-836868" | sudo tee /etc/hostname
	I0910 19:00:10.433585   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-836868
	
	I0910 19:00:10.433608   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.436076   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.436407   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.436440   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.436627   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.436826   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.436972   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.437146   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.437314   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.437480   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.437495   71183 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-836868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-836868/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-836868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 19:00:10.546105   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 19:00:10.546146   71183 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 19:00:10.546186   71183 buildroot.go:174] setting up certificates
	I0910 19:00:10.546197   71183 provision.go:84] configureAuth start
	I0910 19:00:10.546214   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 19:00:10.546485   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:10.549236   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.549567   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.549594   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.549696   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.551807   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.552162   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.552195   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.552326   71183 provision.go:143] copyHostCerts
	I0910 19:00:10.552370   71183 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 19:00:10.552380   71183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 19:00:10.552435   71183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 19:00:10.552559   71183 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 19:00:10.552568   71183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 19:00:10.552588   71183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 19:00:10.552646   71183 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 19:00:10.552653   71183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 19:00:10.552669   71183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 19:00:10.552714   71183 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.embed-certs-836868 san=[127.0.0.1 192.168.39.107 embed-certs-836868 localhost minikube]
	I0910 19:00:10.610073   71183 provision.go:177] copyRemoteCerts
	I0910 19:00:10.610132   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 19:00:10.610153   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.612881   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.613264   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.613301   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.613485   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.613695   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.613863   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.613980   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:10.695479   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 19:00:10.719380   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0910 19:00:10.744099   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 19:00:10.767849   71183 provision.go:87] duration metric: took 221.638443ms to configureAuth
	I0910 19:00:10.767873   71183 buildroot.go:189] setting minikube options for container-runtime
	I0910 19:00:10.768065   71183 config.go:182] Loaded profile config "embed-certs-836868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:00:10.768150   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.770831   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.771149   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.771178   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.771338   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.771539   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.771702   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.771825   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.771952   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.772106   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.772120   71183 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 19:00:10.992528   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 19:00:10.992568   71183 machine.go:96] duration metric: took 786.857321ms to provisionDockerMachine
	I0910 19:00:10.992583   71183 start.go:293] postStartSetup for "embed-certs-836868" (driver="kvm2")
	I0910 19:00:10.992598   71183 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 19:00:10.992630   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:10.992999   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 19:00:10.993030   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.995361   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.995745   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.995777   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.995925   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.996100   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.996212   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.996375   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:11.079205   71183 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 19:00:11.083998   71183 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 19:00:11.084028   71183 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 19:00:11.084089   71183 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 19:00:11.084158   71183 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 19:00:11.084241   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 19:00:11.093150   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 19:00:11.116894   71183 start.go:296] duration metric: took 124.294668ms for postStartSetup
	I0910 19:00:11.116938   71183 fix.go:56] duration metric: took 19.934731446s for fixHost
	I0910 19:00:11.116962   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:11.119482   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.119784   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.119821   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.119980   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:11.120176   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.120331   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.120501   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:11.120645   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:11.120868   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:11.120883   71183 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 19:00:11.217542   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994811.172877822
	
	I0910 19:00:11.217570   71183 fix.go:216] guest clock: 1725994811.172877822
	I0910 19:00:11.217577   71183 fix.go:229] Guest: 2024-09-10 19:00:11.172877822 +0000 UTC Remote: 2024-09-10 19:00:11.116943488 +0000 UTC m=+358.948412200 (delta=55.934334ms)
	I0910 19:00:11.217603   71183 fix.go:200] guest clock delta is within tolerance: 55.934334ms
	I0910 19:00:11.217607   71183 start.go:83] releasing machines lock for "embed-certs-836868", held for 20.035440196s
	I0910 19:00:11.217627   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.217861   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:11.220855   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.221282   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.221313   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.221533   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.222074   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.222277   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.222354   71183 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 19:00:11.222402   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:11.222528   71183 ssh_runner.go:195] Run: cat /version.json
	I0910 19:00:11.222570   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:11.225205   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.225542   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.225565   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.225581   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.225753   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:11.225934   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.226035   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.226062   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.226109   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:11.226207   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:11.226283   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:11.226370   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.226535   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:11.226668   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:11.297642   71183 ssh_runner.go:195] Run: systemctl --version
	I0910 19:00:11.322486   71183 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 19:00:11.470402   71183 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 19:00:11.477843   71183 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 19:00:11.477903   71183 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 19:00:11.495518   71183 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 19:00:11.495542   71183 start.go:495] detecting cgroup driver to use...
	I0910 19:00:11.495597   71183 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 19:00:11.512467   71183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:00:11.526665   71183 docker.go:217] disabling cri-docker service (if available) ...
	I0910 19:00:11.526732   71183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 19:00:11.540445   71183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 19:00:11.554386   71183 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 19:00:11.682012   71183 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 19:00:11.846239   71183 docker.go:233] disabling docker service ...
	I0910 19:00:11.846303   71183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 19:00:11.860981   71183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 19:00:11.874271   71183 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 19:00:12.005716   71183 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 19:00:12.137151   71183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 19:00:12.151156   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:00:12.170086   71183 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 19:00:12.170150   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.180741   71183 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 19:00:12.180804   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.190933   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.200885   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:07.840772   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:08.341153   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:08.840737   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:09.340471   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:09.840262   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:10.340827   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:10.840645   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:11.340524   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:11.840521   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:12.340560   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:12.210950   71183 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 19:00:12.221730   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.232931   71183 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.251318   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.261473   71183 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 19:00:12.270818   71183 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 19:00:12.270873   71183 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 19:00:12.284581   71183 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 19:00:12.294214   71183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:00:12.424646   71183 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 19:00:12.517553   71183 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 19:00:12.517633   71183 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 19:00:12.522728   71183 start.go:563] Will wait 60s for crictl version
	I0910 19:00:12.522775   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:00:12.526754   71183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 19:00:12.569377   71183 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 19:00:12.569454   71183 ssh_runner.go:195] Run: crio --version
	I0910 19:00:12.597783   71183 ssh_runner.go:195] Run: crio --version
	I0910 19:00:12.632619   71183 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 19:00:12.725298   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:15.223906   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:14.035868   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:16.534058   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:12.633800   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:12.637104   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:12.637447   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:12.637476   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:12.637684   71183 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 19:00:12.641996   71183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:00:12.654577   71183 kubeadm.go:883] updating cluster {Name:embed-certs-836868 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-836868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 19:00:12.654684   71183 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 19:00:12.654737   71183 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 19:00:12.694585   71183 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 19:00:12.694644   71183 ssh_runner.go:195] Run: which lz4
	I0910 19:00:12.699764   71183 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 19:00:12.705406   71183 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 19:00:12.705437   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 19:00:14.054131   71183 crio.go:462] duration metric: took 1.354391682s to copy over tarball
	I0910 19:00:14.054206   71183 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 19:00:16.114941   71183 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.06070257s)
	I0910 19:00:16.114968   71183 crio.go:469] duration metric: took 2.060808083s to extract the tarball
	I0910 19:00:16.114978   71183 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 19:00:16.153934   71183 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 19:00:16.199988   71183 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 19:00:16.200008   71183 cache_images.go:84] Images are preloaded, skipping loading
	I0910 19:00:16.200015   71183 kubeadm.go:934] updating node { 192.168.39.107 8443 v1.31.0 crio true true} ...
	I0910 19:00:16.200109   71183 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-836868 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-836868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 19:00:16.200168   71183 ssh_runner.go:195] Run: crio config
	I0910 19:00:16.249409   71183 cni.go:84] Creating CNI manager for ""
	I0910 19:00:16.249430   71183 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:00:16.249443   71183 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 19:00:16.249462   71183 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-836868 NodeName:embed-certs-836868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 19:00:16.249596   71183 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-836868"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 19:00:16.249652   71183 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 19:00:16.265984   71183 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 19:00:16.266062   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 19:00:16.276007   71183 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0910 19:00:16.291971   71183 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 19:00:16.307712   71183 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0910 19:00:16.323789   71183 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0910 19:00:16.327478   71183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:00:16.339545   71183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:00:16.470249   71183 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:00:16.487798   71183 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868 for IP: 192.168.39.107
	I0910 19:00:16.487838   71183 certs.go:194] generating shared ca certs ...
	I0910 19:00:16.487858   71183 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:00:16.488058   71183 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 19:00:16.488110   71183 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 19:00:16.488124   71183 certs.go:256] generating profile certs ...
	I0910 19:00:16.488243   71183 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/client.key
	I0910 19:00:16.488307   71183 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/apiserver.key.04acd22a
	I0910 19:00:16.488355   71183 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/proxy-client.key
	I0910 19:00:16.488507   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 19:00:16.488547   71183 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 19:00:16.488560   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 19:00:16.488593   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 19:00:16.488633   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 19:00:16.488669   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 19:00:16.488856   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 19:00:16.489528   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 19:00:16.529980   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 19:00:16.568653   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 19:00:16.593924   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 19:00:16.628058   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0910 19:00:16.669209   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 19:00:16.693274   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 19:00:16.716323   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 19:00:16.740155   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 19:00:16.763908   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 19:00:16.787980   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 19:00:16.811754   71183 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 19:00:16.828151   71183 ssh_runner.go:195] Run: openssl version
	I0910 19:00:16.834095   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 19:00:16.845376   71183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:00:16.850178   71183 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:00:16.850230   71183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:00:16.856507   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 19:00:16.868105   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 19:00:16.879950   71183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 19:00:16.884778   71183 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 19:00:16.884823   71183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 19:00:16.890715   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 19:00:16.903523   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 19:00:16.914585   71183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 19:00:16.919105   71183 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 19:00:16.919151   71183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 19:00:16.924965   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 19:00:16.935579   71183 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 19:00:16.939895   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 19:00:16.945595   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 19:00:16.951247   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 19:00:16.956938   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 19:00:16.962908   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 19:00:16.968664   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 19:00:16.974624   71183 kubeadm.go:392] StartCluster: {Name:embed-certs-836868 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-836868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:00:16.974725   71183 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 19:00:16.974778   71183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 19:00:17.012869   71183 cri.go:89] found id: ""
	I0910 19:00:17.012947   71183 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 19:00:17.023781   71183 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 19:00:17.023798   71183 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 19:00:17.023846   71183 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 19:00:17.034549   71183 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 19:00:17.035566   71183 kubeconfig.go:125] found "embed-certs-836868" server: "https://192.168.39.107:8443"
	I0910 19:00:17.037751   71183 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 19:00:17.047667   71183 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.107
	I0910 19:00:17.047696   71183 kubeadm.go:1160] stopping kube-system containers ...
	I0910 19:00:17.047708   71183 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 19:00:17.047747   71183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 19:00:17.083130   71183 cri.go:89] found id: ""
	I0910 19:00:17.083200   71183 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 19:00:17.101035   71183 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:00:17.111335   71183 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:00:17.111357   71183 kubeadm.go:157] found existing configuration files:
	
	I0910 19:00:17.111414   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:00:17.120543   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:00:17.120593   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:00:17.130938   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:00:17.140688   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:00:17.140747   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:00:17.150637   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:00:17.160483   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:00:17.160520   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:00:17.170417   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:00:17.179778   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:00:17.179827   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:00:17.189197   71183 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:00:17.199264   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:12.841060   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:13.340347   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:13.841136   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:14.340603   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:14.840913   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:15.341205   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:15.840692   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:16.340839   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:16.841050   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:17.341340   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:17.224985   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:19.231248   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:18.534658   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:20.534807   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:17.309791   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.257162   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.482216   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.555094   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.645089   71183 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:00:18.645178   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.146266   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.645546   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.146275   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.645291   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.662158   71183 api_server.go:72] duration metric: took 2.017082575s to wait for apiserver process to appear ...
	I0910 19:00:20.662183   71183 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:00:20.662204   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:17.840510   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:18.340821   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:18.841156   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.340316   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.840339   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.341140   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.841333   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:21.340342   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:21.840282   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:22.340361   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.326005   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 19:00:23.326036   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 19:00:23.326048   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:23.346004   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 19:00:23.346035   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 19:00:23.662353   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:23.669314   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:00:23.669344   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:00:24.162975   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:24.170262   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:00:24.170298   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:00:24.662865   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:24.667320   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I0910 19:00:24.674393   71183 api_server.go:141] control plane version: v1.31.0
	I0910 19:00:24.674418   71183 api_server.go:131] duration metric: took 4.01222766s to wait for apiserver health ...
	I0910 19:00:24.674427   71183 cni.go:84] Creating CNI manager for ""
	I0910 19:00:24.674433   71183 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:00:24.676229   71183 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 19:00:24.677519   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 19:00:24.692951   71183 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 19:00:24.718355   71183 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:00:24.732731   71183 system_pods.go:59] 8 kube-system pods found
	I0910 19:00:24.732758   71183 system_pods.go:61] "coredns-6f6b679f8f-mt78p" [b4bbfe99-3c36-4095-b7e8-ee0861f9973f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 19:00:24.732764   71183 system_pods.go:61] "etcd-embed-certs-836868" [0515a8db-e1b9-41d9-a69e-e49fbd6af70f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 19:00:24.732775   71183 system_pods.go:61] "kube-apiserver-embed-certs-836868" [f6771940-518b-4ae6-93a6-8ffd2e08a6ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 19:00:24.732781   71183 system_pods.go:61] "kube-controller-manager-embed-certs-836868" [07f6b11e-478b-4501-b615-8906877c7cbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 19:00:24.732798   71183 system_pods.go:61] "kube-proxy-4fddv" [13f0b1df-26eb-4a6c-957d-0b7655309cb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0910 19:00:24.732808   71183 system_pods.go:61] "kube-scheduler-embed-certs-836868" [a16bf2b1-3fe3-487b-92f7-f71f693b83dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 19:00:24.732817   71183 system_pods.go:61] "metrics-server-6867b74b74-26knw" [fdf89bfa-f2b6-4dc4-9279-ed75c1256494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:00:24.732823   71183 system_pods.go:61] "storage-provisioner" [47ed78c5-1cce-4d50-a023-5c356f331035] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 19:00:24.732835   71183 system_pods.go:74] duration metric: took 14.459216ms to wait for pod list to return data ...
	I0910 19:00:24.732846   71183 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:00:24.742472   71183 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:00:24.742497   71183 node_conditions.go:123] node cpu capacity is 2
	I0910 19:00:24.742507   71183 node_conditions.go:105] duration metric: took 9.657853ms to run NodePressure ...
	I0910 19:00:24.742523   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:25.021719   71183 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 19:00:25.026163   71183 kubeadm.go:739] kubelet initialised
	I0910 19:00:25.026187   71183 kubeadm.go:740] duration metric: took 4.442058ms waiting for restarted kubelet to initialise ...
	I0910 19:00:25.026196   71183 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:00:25.030895   71183 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.035021   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.035044   71183 pod_ready.go:82] duration metric: took 4.12756ms for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.035055   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.035064   71183 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.039362   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "etcd-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.039381   71183 pod_ready.go:82] duration metric: took 4.309293ms for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.039389   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "etcd-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.039394   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.049142   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.049164   71183 pod_ready.go:82] duration metric: took 9.762471ms for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.049175   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.049182   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.122255   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.122285   71183 pod_ready.go:82] duration metric: took 73.09407ms for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.122295   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.122301   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.522122   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-proxy-4fddv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.522160   71183 pod_ready.go:82] duration metric: took 399.850787ms for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.522175   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-proxy-4fddv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.522185   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.921918   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.921947   71183 pod_ready.go:82] duration metric: took 399.75274ms for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.921956   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.921962   71183 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:26.322195   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:26.322219   71183 pod_ready.go:82] duration metric: took 400.248825ms for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:26.322228   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:26.322235   71183 pod_ready.go:39] duration metric: took 1.296028669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:00:26.322251   71183 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 19:00:26.333796   71183 ops.go:34] apiserver oom_adj: -16
	I0910 19:00:26.333824   71183 kubeadm.go:597] duration metric: took 9.310018521s to restartPrimaryControlPlane
	I0910 19:00:26.333834   71183 kubeadm.go:394] duration metric: took 9.359219145s to StartCluster
	I0910 19:00:26.333850   71183 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:00:26.333920   71183 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 19:00:26.336496   71183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:00:26.336792   71183 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 19:00:26.336863   71183 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 19:00:26.336935   71183 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-836868"
	I0910 19:00:26.336969   71183 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-836868"
	W0910 19:00:26.336980   71183 addons.go:243] addon storage-provisioner should already be in state true
	I0910 19:00:26.336995   71183 addons.go:69] Setting default-storageclass=true in profile "embed-certs-836868"
	I0910 19:00:26.337050   71183 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-836868"
	I0910 19:00:26.337058   71183 config.go:182] Loaded profile config "embed-certs-836868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:00:26.337050   71183 addons.go:69] Setting metrics-server=true in profile "embed-certs-836868"
	I0910 19:00:26.337011   71183 host.go:66] Checking if "embed-certs-836868" exists ...
	I0910 19:00:26.337146   71183 addons.go:234] Setting addon metrics-server=true in "embed-certs-836868"
	W0910 19:00:26.337165   71183 addons.go:243] addon metrics-server should already be in state true
	I0910 19:00:26.337234   71183 host.go:66] Checking if "embed-certs-836868" exists ...
	I0910 19:00:26.337501   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.337547   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.337552   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.337583   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.337638   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.337677   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.339741   71183 out.go:177] * Verifying Kubernetes components...
	I0910 19:00:26.341792   71183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:00:26.354154   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37431
	I0910 19:00:26.354750   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.355345   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.355379   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.355756   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.356316   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
	I0910 19:00:26.356389   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.356428   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.356508   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35641
	I0910 19:00:26.356810   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.356893   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.357384   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.357411   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.361164   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.361278   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.361302   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.361363   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.361709   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.362446   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.362483   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.364762   71183 addons.go:234] Setting addon default-storageclass=true in "embed-certs-836868"
	W0910 19:00:26.364786   71183 addons.go:243] addon default-storageclass should already be in state true
	I0910 19:00:26.364814   71183 host.go:66] Checking if "embed-certs-836868" exists ...
	I0910 19:00:26.365165   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.365230   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.379158   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35583
	I0910 19:00:26.379696   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.380235   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.380266   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.380654   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.380865   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.382030   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0910 19:00:26.382358   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.382892   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.382912   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.382928   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:26.383271   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.383441   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.385129   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:26.385171   71183 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 19:00:26.385687   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40109
	I0910 19:00:26.386001   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.386217   71183 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:00:21.723833   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:23.724422   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:25.724456   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:23.034262   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:25.035125   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:26.386227   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 19:00:26.386289   71183 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 19:00:26.386309   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:26.386518   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.386533   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.386931   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.387566   71183 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:00:26.387651   71183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 19:00:26.387672   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:26.387618   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.387760   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.389782   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.389941   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:26.390190   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.390263   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:26.390558   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:26.390744   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:26.390921   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:26.391058   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.391542   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:26.391585   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.391788   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:26.391941   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:26.392097   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:26.392256   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:26.404601   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42887
	I0910 19:00:26.405167   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.406097   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.406655   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.407006   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.407163   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.409223   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:26.409437   71183 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 19:00:26.409454   71183 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 19:00:26.409470   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:26.412388   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.412812   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:26.412831   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.413010   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:26.413177   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:26.413333   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:26.413474   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:26.533906   71183 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:00:26.552203   71183 node_ready.go:35] waiting up to 6m0s for node "embed-certs-836868" to be "Ready" ...
	I0910 19:00:26.687774   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 19:00:26.687804   71183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 19:00:26.690124   71183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:00:26.737647   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 19:00:26.737673   71183 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 19:00:26.739650   71183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 19:00:26.783096   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:00:26.783125   71183 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 19:00:26.828766   71183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:00:22.841048   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.341180   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.841325   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:24.340485   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:24.841340   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:25.340935   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:25.840886   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:26.340826   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:26.840344   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:27.341189   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:27.844896   71183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.154733205s)
	I0910 19:00:27.844931   71183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.105250764s)
	I0910 19:00:27.844944   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.844969   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.844979   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.844980   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.845406   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.845420   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.845434   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.845446   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.845464   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.845471   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.845702   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.845733   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.845747   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.847084   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.847101   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.847110   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.847118   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.847308   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.847323   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.852938   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.852956   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.853198   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.853219   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.853224   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.879527   71183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.05071539s)
	I0910 19:00:27.879577   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.879597   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.880030   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.880050   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.880059   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.880081   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.880381   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.880405   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.880416   71183 addons.go:475] Verifying addon metrics-server=true in "embed-certs-836868"
	I0910 19:00:27.880383   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.883034   71183 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0910 19:00:28.222881   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:30.223636   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:27.535036   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:30.034633   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:27.884243   71183 addons.go:510] duration metric: took 1.547392632s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0910 19:00:28.556786   71183 node_ready.go:53] node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:31.055519   71183 node_ready.go:53] node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:27.840306   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:28.340657   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:28.841179   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:29.340881   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:29.840957   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:30.341260   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:30.841151   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:31.340930   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:31.840360   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:32.341199   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:32.724435   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:35.223194   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:32.533611   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:34.534941   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:37.034007   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:33.056381   71183 node_ready.go:53] node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:34.056156   71183 node_ready.go:49] node "embed-certs-836868" has status "Ready":"True"
	I0910 19:00:34.056191   71183 node_ready.go:38] duration metric: took 7.503955102s for node "embed-certs-836868" to be "Ready" ...
	I0910 19:00:34.056200   71183 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:00:34.063331   71183 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:34.068294   71183 pod_ready.go:93] pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:34.068322   71183 pod_ready.go:82] duration metric: took 4.96275ms for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:34.068335   71183 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:36.077798   71183 pod_ready.go:103] pod "etcd-embed-certs-836868" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:32.841192   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:33.340518   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:33.840995   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:34.341016   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:34.840480   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:35.340647   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:35.840416   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:36.340921   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:36.840557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:37.340956   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:37.224065   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:39.723852   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:39.533725   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:41.534430   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:37.576189   71183 pod_ready.go:93] pod "etcd-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.576218   71183 pod_ready.go:82] duration metric: took 3.507872898s for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.576238   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.582150   71183 pod_ready.go:93] pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.582167   71183 pod_ready.go:82] duration metric: took 5.921544ms for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.582175   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.586941   71183 pod_ready.go:93] pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.586956   71183 pod_ready.go:82] duration metric: took 4.774648ms for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.586963   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.591829   71183 pod_ready.go:93] pod "kube-proxy-4fddv" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.591846   71183 pod_ready.go:82] duration metric: took 4.876938ms for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.591854   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.657930   71183 pod_ready.go:93] pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.657952   71183 pod_ready.go:82] duration metric: took 66.092785ms for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.657962   71183 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:39.665465   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:41.666629   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:37.841210   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:38.341302   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:38.840926   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:39.340558   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:39.840395   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:40.341022   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:40.841093   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:41.341228   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:41.841103   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:42.340329   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:42.223446   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:44.223533   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:46.224840   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:44.033565   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:46.034402   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:44.164336   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:46.164983   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:42.841000   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:43.341147   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:43.840534   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:44.340988   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:44.840926   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:45.340859   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:45.840877   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:46.340930   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:46.841175   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:47.341064   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.722930   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:50.723539   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:48.036816   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:50.534367   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:48.667433   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:51.164114   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:47.841037   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.341204   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.840961   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:49.340679   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:49.841173   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:50.340751   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:50.841158   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:51.340999   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:51.840349   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:52.340383   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:52.723945   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:55.224168   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:53.034234   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:55.533690   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:53.164294   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:55.666369   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:52.840991   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:53.340439   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:53.840487   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:54.340407   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:54.840557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:55.340603   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:55.840619   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:56.340844   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:56.841190   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:57.340927   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:57.724247   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:00.223715   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:58.033639   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:00.034297   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:57.670234   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:00.164278   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:02.164755   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:57.840798   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:58.340905   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:58.841330   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:59.340743   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:59.840256   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:00.340970   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:00.840732   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:01.340927   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:01.341014   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:01.378922   72122 cri.go:89] found id: ""
	I0910 19:01:01.378953   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.378964   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:01.378971   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:01.379032   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:01.413274   72122 cri.go:89] found id: ""
	I0910 19:01:01.413302   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.413313   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:01.413320   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:01.413383   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:01.449165   72122 cri.go:89] found id: ""
	I0910 19:01:01.449204   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.449215   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:01.449221   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:01.449291   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:01.484627   72122 cri.go:89] found id: ""
	I0910 19:01:01.484650   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.484657   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:01.484663   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:01.484720   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:01.519332   72122 cri.go:89] found id: ""
	I0910 19:01:01.519357   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.519364   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:01.519370   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:01.519424   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:01.554080   72122 cri.go:89] found id: ""
	I0910 19:01:01.554102   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.554109   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:01.554114   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:01.554160   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:01.590100   72122 cri.go:89] found id: ""
	I0910 19:01:01.590131   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.590143   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:01.590149   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:01.590208   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:01.623007   72122 cri.go:89] found id: ""
	I0910 19:01:01.623034   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.623045   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:01.623055   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:01.623070   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:01.679940   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:01.679971   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:01.694183   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:01.694218   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:01.826997   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:01.827025   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:01.827038   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:01.903885   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:01.903926   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:02.224039   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:04.224422   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:02.533395   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:05.034075   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:04.665680   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:06.665874   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:04.450792   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:04.471427   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:04.471501   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:04.521450   72122 cri.go:89] found id: ""
	I0910 19:01:04.521484   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.521494   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:04.521503   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:04.521562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:04.577588   72122 cri.go:89] found id: ""
	I0910 19:01:04.577622   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.577633   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:04.577641   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:04.577707   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:04.615558   72122 cri.go:89] found id: ""
	I0910 19:01:04.615586   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.615594   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:04.615599   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:04.615652   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:04.655763   72122 cri.go:89] found id: ""
	I0910 19:01:04.655793   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.655806   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:04.655815   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:04.655881   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:04.692620   72122 cri.go:89] found id: ""
	I0910 19:01:04.692642   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.692649   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:04.692654   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:04.692709   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:04.730575   72122 cri.go:89] found id: ""
	I0910 19:01:04.730601   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.730611   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:04.730616   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:04.730665   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:04.766716   72122 cri.go:89] found id: ""
	I0910 19:01:04.766742   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.766749   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:04.766754   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:04.766799   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:04.808122   72122 cri.go:89] found id: ""
	I0910 19:01:04.808151   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.808162   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:04.808173   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:04.808185   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:04.858563   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:04.858592   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:04.872323   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:04.872350   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:04.942541   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:04.942571   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:04.942588   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:05.022303   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:05.022338   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:06.723760   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:08.724550   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:11.223094   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:07.533060   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:09.534466   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:12.034244   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:09.163526   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:11.164502   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:07.562092   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:07.575254   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:07.575308   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:07.616583   72122 cri.go:89] found id: ""
	I0910 19:01:07.616607   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.616615   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:07.616620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:07.616676   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:07.654676   72122 cri.go:89] found id: ""
	I0910 19:01:07.654700   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.654711   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:07.654718   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:07.654790   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:07.690054   72122 cri.go:89] found id: ""
	I0910 19:01:07.690085   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.690096   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:07.690104   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:07.690171   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:07.724273   72122 cri.go:89] found id: ""
	I0910 19:01:07.724295   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.724302   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:07.724307   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:07.724363   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:07.757621   72122 cri.go:89] found id: ""
	I0910 19:01:07.757646   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.757654   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:07.757660   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:07.757716   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:07.791502   72122 cri.go:89] found id: ""
	I0910 19:01:07.791533   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.791543   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:07.791557   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:07.791620   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:07.825542   72122 cri.go:89] found id: ""
	I0910 19:01:07.825577   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.825586   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:07.825592   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:07.825649   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:07.862278   72122 cri.go:89] found id: ""
	I0910 19:01:07.862303   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.862312   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:07.862320   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:07.862331   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:07.952016   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:07.952059   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:07.997004   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:07.997034   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:08.047745   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:08.047783   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:08.064712   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:08.064736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:08.136822   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:10.637017   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:10.650113   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:10.650198   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:10.687477   72122 cri.go:89] found id: ""
	I0910 19:01:10.687504   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.687513   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:10.687520   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:10.687594   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:10.721410   72122 cri.go:89] found id: ""
	I0910 19:01:10.721437   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.721447   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:10.721455   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:10.721514   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:10.757303   72122 cri.go:89] found id: ""
	I0910 19:01:10.757330   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.757338   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:10.757343   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:10.757396   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:10.794761   72122 cri.go:89] found id: ""
	I0910 19:01:10.794788   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.794799   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:10.794806   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:10.794885   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:10.828631   72122 cri.go:89] found id: ""
	I0910 19:01:10.828657   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.828668   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:10.828675   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:10.828737   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:10.863609   72122 cri.go:89] found id: ""
	I0910 19:01:10.863634   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.863641   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:10.863646   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:10.863734   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:10.899299   72122 cri.go:89] found id: ""
	I0910 19:01:10.899324   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.899335   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:10.899342   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:10.899403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:10.939233   72122 cri.go:89] found id: ""
	I0910 19:01:10.939259   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.939268   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:10.939277   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:10.939290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:10.976599   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:10.976627   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:11.029099   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:11.029144   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:11.045401   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:11.045426   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:11.119658   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:11.119679   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:11.119696   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:13.224189   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:15.723673   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:14.034325   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:16.534463   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:13.663847   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:15.664387   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:13.698696   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:13.712317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:13.712386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:13.747442   72122 cri.go:89] found id: ""
	I0910 19:01:13.747470   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.747480   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:13.747487   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:13.747555   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:13.782984   72122 cri.go:89] found id: ""
	I0910 19:01:13.783008   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.783015   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:13.783021   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:13.783078   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:13.820221   72122 cri.go:89] found id: ""
	I0910 19:01:13.820245   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.820256   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:13.820262   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:13.820322   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:13.854021   72122 cri.go:89] found id: ""
	I0910 19:01:13.854056   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.854068   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:13.854075   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:13.854138   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:13.888292   72122 cri.go:89] found id: ""
	I0910 19:01:13.888321   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.888331   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:13.888338   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:13.888398   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:13.922301   72122 cri.go:89] found id: ""
	I0910 19:01:13.922330   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.922341   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:13.922349   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:13.922408   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:13.959977   72122 cri.go:89] found id: ""
	I0910 19:01:13.960002   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.960010   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:13.960015   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:13.960074   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:13.995255   72122 cri.go:89] found id: ""
	I0910 19:01:13.995282   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.995293   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:13.995308   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:13.995323   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:14.050760   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:14.050790   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:14.064694   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:14.064723   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:14.137406   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:14.137431   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:14.137447   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:14.216624   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:14.216657   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:16.765643   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:16.778746   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:16.778821   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:16.814967   72122 cri.go:89] found id: ""
	I0910 19:01:16.814999   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.815010   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:16.815017   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:16.815073   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:16.850306   72122 cri.go:89] found id: ""
	I0910 19:01:16.850334   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.850345   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:16.850352   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:16.850413   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:16.886104   72122 cri.go:89] found id: ""
	I0910 19:01:16.886134   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.886144   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:16.886152   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:16.886218   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:16.921940   72122 cri.go:89] found id: ""
	I0910 19:01:16.921968   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.921977   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:16.921983   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:16.922032   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:16.956132   72122 cri.go:89] found id: ""
	I0910 19:01:16.956166   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.956177   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:16.956185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:16.956247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:16.988240   72122 cri.go:89] found id: ""
	I0910 19:01:16.988269   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.988278   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:16.988284   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:16.988330   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:17.022252   72122 cri.go:89] found id: ""
	I0910 19:01:17.022281   72122 logs.go:276] 0 containers: []
	W0910 19:01:17.022291   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:17.022297   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:17.022364   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:17.058664   72122 cri.go:89] found id: ""
	I0910 19:01:17.058693   72122 logs.go:276] 0 containers: []
	W0910 19:01:17.058703   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:17.058715   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:17.058740   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:17.136927   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:17.136964   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:17.189427   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:17.189457   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:17.242193   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:17.242225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:17.257878   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:17.257908   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:17.330096   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:17.724465   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:20.224230   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:18.534806   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:21.034368   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:17.667897   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:20.165174   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:22.165421   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:19.831030   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:19.844516   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:19.844581   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:19.879878   72122 cri.go:89] found id: ""
	I0910 19:01:19.879908   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.879919   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:19.879927   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:19.879988   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:19.915992   72122 cri.go:89] found id: ""
	I0910 19:01:19.916018   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.916025   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:19.916030   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:19.916084   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:19.949206   72122 cri.go:89] found id: ""
	I0910 19:01:19.949232   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.949242   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:19.949249   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:19.949311   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:19.983011   72122 cri.go:89] found id: ""
	I0910 19:01:19.983035   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.983043   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:19.983048   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:19.983096   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:20.018372   72122 cri.go:89] found id: ""
	I0910 19:01:20.018394   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.018402   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:20.018408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:20.018466   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:20.053941   72122 cri.go:89] found id: ""
	I0910 19:01:20.053967   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.053975   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:20.053980   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:20.054037   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:20.084999   72122 cri.go:89] found id: ""
	I0910 19:01:20.085026   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.085035   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:20.085042   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:20.085115   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:20.124036   72122 cri.go:89] found id: ""
	I0910 19:01:20.124063   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.124072   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:20.124086   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:20.124103   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:20.176917   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:20.176944   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:20.190831   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:20.190852   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:20.257921   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:20.257942   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:20.257954   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:20.335320   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:20.335350   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:22.723788   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:25.223765   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:23.034456   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:25.534821   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:24.663208   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:26.664282   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:22.875167   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:22.888803   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:22.888858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:22.922224   72122 cri.go:89] found id: ""
	I0910 19:01:22.922252   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.922264   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:22.922270   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:22.922328   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:22.959502   72122 cri.go:89] found id: ""
	I0910 19:01:22.959536   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.959546   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:22.959553   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:22.959619   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:22.992914   72122 cri.go:89] found id: ""
	I0910 19:01:22.992944   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.992955   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:22.992962   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:22.993022   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:23.028342   72122 cri.go:89] found id: ""
	I0910 19:01:23.028367   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.028376   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:23.028384   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:23.028443   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:23.064715   72122 cri.go:89] found id: ""
	I0910 19:01:23.064742   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.064753   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:23.064761   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:23.064819   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:23.100752   72122 cri.go:89] found id: ""
	I0910 19:01:23.100781   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.100789   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:23.100795   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:23.100857   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:23.136017   72122 cri.go:89] found id: ""
	I0910 19:01:23.136045   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.136055   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:23.136062   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:23.136108   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:23.170787   72122 cri.go:89] found id: ""
	I0910 19:01:23.170811   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.170819   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:23.170826   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:23.170840   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:23.210031   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:23.210059   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:23.261525   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:23.261557   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:23.275611   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:23.275636   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:23.348543   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:23.348568   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:23.348582   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:25.929406   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:25.942658   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:25.942737   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:25.977231   72122 cri.go:89] found id: ""
	I0910 19:01:25.977260   72122 logs.go:276] 0 containers: []
	W0910 19:01:25.977270   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:25.977277   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:25.977336   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:26.015060   72122 cri.go:89] found id: ""
	I0910 19:01:26.015093   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.015103   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:26.015110   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:26.015180   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:26.053618   72122 cri.go:89] found id: ""
	I0910 19:01:26.053643   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.053651   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:26.053656   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:26.053712   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:26.090489   72122 cri.go:89] found id: ""
	I0910 19:01:26.090515   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.090523   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:26.090529   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:26.090587   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:26.126687   72122 cri.go:89] found id: ""
	I0910 19:01:26.126710   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.126718   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:26.126723   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:26.126771   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:26.160901   72122 cri.go:89] found id: ""
	I0910 19:01:26.160939   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.160951   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:26.160959   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:26.161017   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:26.195703   72122 cri.go:89] found id: ""
	I0910 19:01:26.195728   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.195737   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:26.195743   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:26.195794   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:26.230394   72122 cri.go:89] found id: ""
	I0910 19:01:26.230414   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.230422   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:26.230430   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:26.230444   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:26.296884   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:26.296905   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:26.296926   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:26.371536   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:26.371569   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:26.412926   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:26.412958   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:26.462521   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:26.462550   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:27.725957   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:30.224312   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:28.034338   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:30.034794   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:32.035284   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:28.668205   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:31.166271   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:28.976550   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:28.989517   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:28.989586   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:29.025638   72122 cri.go:89] found id: ""
	I0910 19:01:29.025662   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.025671   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:29.025677   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:29.025719   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:29.067473   72122 cri.go:89] found id: ""
	I0910 19:01:29.067495   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.067502   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:29.067507   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:29.067556   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:29.105587   72122 cri.go:89] found id: ""
	I0910 19:01:29.105616   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.105628   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:29.105635   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:29.105696   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:29.142427   72122 cri.go:89] found id: ""
	I0910 19:01:29.142458   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.142468   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:29.142474   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:29.142530   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:29.178553   72122 cri.go:89] found id: ""
	I0910 19:01:29.178575   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.178582   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:29.178587   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:29.178638   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:29.212997   72122 cri.go:89] found id: ""
	I0910 19:01:29.213025   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.213034   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:29.213040   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:29.213109   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:29.247057   72122 cri.go:89] found id: ""
	I0910 19:01:29.247083   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.247091   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:29.247097   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:29.247151   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:29.285042   72122 cri.go:89] found id: ""
	I0910 19:01:29.285084   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.285096   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:29.285107   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:29.285131   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:29.336003   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:29.336033   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:29.349867   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:29.349890   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:29.422006   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:29.422028   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:29.422043   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:29.504047   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:29.504079   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:32.050723   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:32.063851   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:32.063904   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:32.100816   72122 cri.go:89] found id: ""
	I0910 19:01:32.100841   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.100851   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:32.100858   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:32.100924   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:32.134863   72122 cri.go:89] found id: ""
	I0910 19:01:32.134892   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.134902   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:32.134909   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:32.134967   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:32.169873   72122 cri.go:89] found id: ""
	I0910 19:01:32.169901   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.169912   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:32.169919   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:32.169973   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:32.202161   72122 cri.go:89] found id: ""
	I0910 19:01:32.202187   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.202197   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:32.202204   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:32.202264   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:32.236850   72122 cri.go:89] found id: ""
	I0910 19:01:32.236879   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.236888   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:32.236896   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:32.236957   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:32.271479   72122 cri.go:89] found id: ""
	I0910 19:01:32.271511   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.271530   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:32.271542   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:32.271701   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:32.306724   72122 cri.go:89] found id: ""
	I0910 19:01:32.306747   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.306754   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:32.306760   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:32.306811   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:32.341153   72122 cri.go:89] found id: ""
	I0910 19:01:32.341184   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.341195   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:32.341206   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:32.341221   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:32.393087   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:32.393122   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:32.406565   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:32.406591   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:32.478030   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:32.478048   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:32.478079   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:32.224371   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:34.723372   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:34.533510   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:37.033933   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:33.671725   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:36.165396   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:32.568440   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:32.568478   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:35.112022   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:35.125210   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:35.125286   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:35.160716   72122 cri.go:89] found id: ""
	I0910 19:01:35.160743   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.160753   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:35.160759   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:35.160817   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:35.196500   72122 cri.go:89] found id: ""
	I0910 19:01:35.196530   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.196541   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:35.196548   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:35.196622   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:35.232476   72122 cri.go:89] found id: ""
	I0910 19:01:35.232510   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.232521   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:35.232528   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:35.232590   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:35.269612   72122 cri.go:89] found id: ""
	I0910 19:01:35.269635   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.269644   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:35.269649   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:35.269697   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:35.307368   72122 cri.go:89] found id: ""
	I0910 19:01:35.307393   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.307401   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:35.307408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:35.307475   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:35.342079   72122 cri.go:89] found id: ""
	I0910 19:01:35.342108   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.342119   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:35.342126   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:35.342188   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:35.379732   72122 cri.go:89] found id: ""
	I0910 19:01:35.379761   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.379771   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:35.379778   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:35.379840   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:35.419067   72122 cri.go:89] found id: ""
	I0910 19:01:35.419098   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.419109   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:35.419120   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:35.419139   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:35.472459   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:35.472494   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:35.487044   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:35.487078   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:35.565242   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:35.565264   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:35.565282   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:35.645918   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:35.645951   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:36.724330   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:38.724368   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:41.224272   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:39.533968   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:41.534579   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:38.666059   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:41.164158   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:38.189238   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:38.203973   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:38.204035   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:38.241402   72122 cri.go:89] found id: ""
	I0910 19:01:38.241428   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.241438   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:38.241446   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:38.241506   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:38.280657   72122 cri.go:89] found id: ""
	I0910 19:01:38.280685   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.280693   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:38.280698   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:38.280753   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:38.319697   72122 cri.go:89] found id: ""
	I0910 19:01:38.319725   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.319735   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:38.319742   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:38.319804   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:38.356766   72122 cri.go:89] found id: ""
	I0910 19:01:38.356799   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.356810   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:38.356817   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:38.356876   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:38.395468   72122 cri.go:89] found id: ""
	I0910 19:01:38.395497   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.395508   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:38.395516   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:38.395577   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:38.434942   72122 cri.go:89] found id: ""
	I0910 19:01:38.434965   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.434974   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:38.434979   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:38.435025   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:38.470687   72122 cri.go:89] found id: ""
	I0910 19:01:38.470715   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.470724   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:38.470729   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:38.470777   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:38.505363   72122 cri.go:89] found id: ""
	I0910 19:01:38.505394   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.505405   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:38.505417   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:38.505432   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:38.557735   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:38.557770   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:38.586094   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:38.586128   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:38.665190   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:38.665215   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:38.665231   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:38.743748   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:38.743779   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:41.284310   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:41.299086   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:41.299157   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:41.340453   72122 cri.go:89] found id: ""
	I0910 19:01:41.340476   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.340484   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:41.340489   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:41.340544   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:41.374028   72122 cri.go:89] found id: ""
	I0910 19:01:41.374052   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.374060   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:41.374066   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:41.374117   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:41.413888   72122 cri.go:89] found id: ""
	I0910 19:01:41.413915   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.413929   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:41.413935   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:41.413994   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:41.450846   72122 cri.go:89] found id: ""
	I0910 19:01:41.450873   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.450883   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:41.450890   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:41.450950   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:41.484080   72122 cri.go:89] found id: ""
	I0910 19:01:41.484107   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.484115   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:41.484120   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:41.484168   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:41.523652   72122 cri.go:89] found id: ""
	I0910 19:01:41.523677   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.523685   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:41.523690   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:41.523749   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:41.563690   72122 cri.go:89] found id: ""
	I0910 19:01:41.563715   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.563727   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:41.563734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:41.563797   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:41.602101   72122 cri.go:89] found id: ""
	I0910 19:01:41.602122   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.602130   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:41.602137   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:41.602152   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:41.655459   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:41.655488   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:41.670037   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:41.670062   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:41.741399   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:41.741417   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:41.741428   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:41.817411   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:41.817445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:43.726285   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:46.223867   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:44.034404   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:46.533246   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:43.666629   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:46.164675   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:44.363631   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:44.378279   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:44.378344   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:44.412450   72122 cri.go:89] found id: ""
	I0910 19:01:44.412486   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.412495   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:44.412502   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:44.412569   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:44.448378   72122 cri.go:89] found id: ""
	I0910 19:01:44.448407   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.448415   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:44.448420   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:44.448470   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:44.483478   72122 cri.go:89] found id: ""
	I0910 19:01:44.483516   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.483524   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:44.483530   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:44.483584   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:44.517787   72122 cri.go:89] found id: ""
	I0910 19:01:44.517812   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.517822   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:44.517828   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:44.517886   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:44.554909   72122 cri.go:89] found id: ""
	I0910 19:01:44.554939   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.554950   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:44.554957   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:44.555018   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:44.589865   72122 cri.go:89] found id: ""
	I0910 19:01:44.589890   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.589909   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:44.589923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:44.589968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:44.626712   72122 cri.go:89] found id: ""
	I0910 19:01:44.626739   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.626749   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:44.626756   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:44.626815   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:44.664985   72122 cri.go:89] found id: ""
	I0910 19:01:44.665067   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.665103   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:44.665114   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:44.665165   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:44.721160   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:44.721196   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:44.735339   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:44.735366   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:44.810056   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:44.810080   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:44.810094   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:44.898822   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:44.898871   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:47.438440   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:47.451438   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:47.451506   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:48.723661   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:50.723768   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:48.534671   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:51.033397   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:48.164739   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:50.665165   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:47.491703   72122 cri.go:89] found id: ""
	I0910 19:01:47.491729   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.491740   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:47.491747   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:47.491811   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:47.526834   72122 cri.go:89] found id: ""
	I0910 19:01:47.526862   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.526874   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:47.526880   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:47.526940   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:47.570463   72122 cri.go:89] found id: ""
	I0910 19:01:47.570488   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.570496   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:47.570503   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:47.570545   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:47.608691   72122 cri.go:89] found id: ""
	I0910 19:01:47.608715   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.608727   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:47.608734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:47.608780   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:47.648279   72122 cri.go:89] found id: ""
	I0910 19:01:47.648308   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.648316   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:47.648324   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:47.648386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:47.684861   72122 cri.go:89] found id: ""
	I0910 19:01:47.684885   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.684892   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:47.684897   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:47.684947   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:47.721004   72122 cri.go:89] found id: ""
	I0910 19:01:47.721037   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.721049   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:47.721056   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:47.721134   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:47.756154   72122 cri.go:89] found id: ""
	I0910 19:01:47.756181   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.756192   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:47.756202   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:47.756217   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:47.806860   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:47.806889   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:47.822419   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:47.822445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:47.891966   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:47.891986   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:47.892000   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:47.978510   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:47.978561   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:50.519264   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:50.533576   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:50.533630   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:50.567574   72122 cri.go:89] found id: ""
	I0910 19:01:50.567601   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.567612   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:50.567619   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:50.567678   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:50.608824   72122 cri.go:89] found id: ""
	I0910 19:01:50.608850   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.608858   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:50.608863   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:50.608939   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:50.644502   72122 cri.go:89] found id: ""
	I0910 19:01:50.644530   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.644538   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:50.644544   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:50.644590   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:50.682309   72122 cri.go:89] found id: ""
	I0910 19:01:50.682332   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.682340   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:50.682345   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:50.682404   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:50.735372   72122 cri.go:89] found id: ""
	I0910 19:01:50.735398   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.735410   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:50.735418   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:50.735482   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:50.786364   72122 cri.go:89] found id: ""
	I0910 19:01:50.786391   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.786401   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:50.786408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:50.786464   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:50.831525   72122 cri.go:89] found id: ""
	I0910 19:01:50.831564   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.831575   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:50.831582   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:50.831645   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:50.873457   72122 cri.go:89] found id: ""
	I0910 19:01:50.873482   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.873493   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:50.873503   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:50.873524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:50.956032   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:50.956069   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:50.996871   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:50.996904   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:51.047799   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:51.047824   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:51.061946   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:51.061970   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:51.136302   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:53.222492   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:55.223835   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:53.034478   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:55.532623   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:52.665991   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:55.164343   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:53.636464   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:53.649971   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:53.650054   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:53.688172   72122 cri.go:89] found id: ""
	I0910 19:01:53.688201   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.688211   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:53.688217   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:53.688274   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:53.725094   72122 cri.go:89] found id: ""
	I0910 19:01:53.725119   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.725128   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:53.725135   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:53.725196   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:53.763866   72122 cri.go:89] found id: ""
	I0910 19:01:53.763893   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.763907   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:53.763914   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:53.763971   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:53.797760   72122 cri.go:89] found id: ""
	I0910 19:01:53.797787   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.797798   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:53.797805   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:53.797862   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:53.830305   72122 cri.go:89] found id: ""
	I0910 19:01:53.830332   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.830340   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:53.830346   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:53.830402   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:53.861970   72122 cri.go:89] found id: ""
	I0910 19:01:53.861995   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.862003   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:53.862009   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:53.862059   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:53.896577   72122 cri.go:89] found id: ""
	I0910 19:01:53.896600   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.896609   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:53.896614   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:53.896660   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:53.935051   72122 cri.go:89] found id: ""
	I0910 19:01:53.935077   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.935086   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:53.935094   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:53.935105   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:53.950252   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:53.950276   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:54.023327   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:54.023346   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:54.023361   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:54.101605   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:54.101643   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:54.142906   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:54.142930   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:56.697701   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:56.717755   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:56.717836   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:56.763564   72122 cri.go:89] found id: ""
	I0910 19:01:56.763594   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.763606   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:56.763613   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:56.763675   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:56.815780   72122 cri.go:89] found id: ""
	I0910 19:01:56.815808   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.815816   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:56.815821   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:56.815883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:56.848983   72122 cri.go:89] found id: ""
	I0910 19:01:56.849013   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.849024   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:56.849032   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:56.849100   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:56.880660   72122 cri.go:89] found id: ""
	I0910 19:01:56.880690   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.880702   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:56.880709   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:56.880756   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:56.922836   72122 cri.go:89] found id: ""
	I0910 19:01:56.922860   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.922867   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:56.922873   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:56.922938   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:56.963474   72122 cri.go:89] found id: ""
	I0910 19:01:56.963505   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.963517   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:56.963524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:56.963585   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:56.996837   72122 cri.go:89] found id: ""
	I0910 19:01:56.996864   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.996872   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:56.996877   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:56.996925   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:57.029594   72122 cri.go:89] found id: ""
	I0910 19:01:57.029629   72122 logs.go:276] 0 containers: []
	W0910 19:01:57.029640   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:57.029651   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:57.029664   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:57.083745   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:57.083772   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:57.099269   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:57.099293   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:57.174098   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:57.174118   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:57.174129   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:57.258833   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:57.258869   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:57.224384   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:59.722547   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:57.533178   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:59.533798   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:02.035089   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:57.665383   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:00.164920   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:59.800644   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:59.814728   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:59.814805   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:59.854081   72122 cri.go:89] found id: ""
	I0910 19:01:59.854113   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.854124   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:59.854133   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:59.854197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:59.889524   72122 cri.go:89] found id: ""
	I0910 19:01:59.889550   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.889560   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:59.889567   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:59.889626   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:59.925833   72122 cri.go:89] found id: ""
	I0910 19:01:59.925859   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.925866   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:59.925872   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:59.925935   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:59.962538   72122 cri.go:89] found id: ""
	I0910 19:01:59.962575   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.962586   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:59.962593   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:59.962650   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:59.996994   72122 cri.go:89] found id: ""
	I0910 19:01:59.997025   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.997037   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:59.997045   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:59.997126   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:00.032881   72122 cri.go:89] found id: ""
	I0910 19:02:00.032905   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.032915   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:00.032923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:00.032988   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:00.065838   72122 cri.go:89] found id: ""
	I0910 19:02:00.065861   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.065869   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:00.065874   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:00.065927   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:00.099479   72122 cri.go:89] found id: ""
	I0910 19:02:00.099505   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.099516   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:00.099526   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:00.099540   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:00.182661   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:00.182689   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:00.223514   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:00.223553   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:00.273695   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:00.273721   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:00.287207   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:00.287233   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:00.353975   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:01.724647   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:04.224071   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:06.225475   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:04.534230   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:06.534474   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:02.665228   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:04.667935   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:07.163506   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:02.854145   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:02.867413   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:02.867484   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:02.904299   72122 cri.go:89] found id: ""
	I0910 19:02:02.904327   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.904335   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:02.904340   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:02.904392   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:02.940981   72122 cri.go:89] found id: ""
	I0910 19:02:02.941010   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.941019   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:02.941024   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:02.941099   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:02.980013   72122 cri.go:89] found id: ""
	I0910 19:02:02.980038   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.980046   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:02.980052   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:02.980111   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:03.020041   72122 cri.go:89] found id: ""
	I0910 19:02:03.020071   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.020080   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:03.020087   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:03.020144   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:03.055228   72122 cri.go:89] found id: ""
	I0910 19:02:03.055264   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.055277   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:03.055285   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:03.055347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:03.088696   72122 cri.go:89] found id: ""
	I0910 19:02:03.088722   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.088730   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:03.088736   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:03.088787   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:03.124753   72122 cri.go:89] found id: ""
	I0910 19:02:03.124776   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.124785   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:03.124792   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:03.124849   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:03.157191   72122 cri.go:89] found id: ""
	I0910 19:02:03.157222   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.157230   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:03.157238   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:03.157248   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:03.239015   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:03.239044   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:03.279323   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:03.279355   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:03.328034   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:03.328067   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:03.341591   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:03.341620   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:03.411057   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:05.911503   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:05.924794   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:05.924868   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:05.958827   72122 cri.go:89] found id: ""
	I0910 19:02:05.958852   72122 logs.go:276] 0 containers: []
	W0910 19:02:05.958859   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:05.958865   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:05.958920   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:05.992376   72122 cri.go:89] found id: ""
	I0910 19:02:05.992412   72122 logs.go:276] 0 containers: []
	W0910 19:02:05.992423   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:05.992429   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:05.992485   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:06.028058   72122 cri.go:89] found id: ""
	I0910 19:02:06.028088   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.028098   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:06.028107   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:06.028162   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:06.066428   72122 cri.go:89] found id: ""
	I0910 19:02:06.066458   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.066470   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:06.066477   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:06.066533   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:06.102750   72122 cri.go:89] found id: ""
	I0910 19:02:06.102774   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.102782   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:06.102787   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:06.102841   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:06.137216   72122 cri.go:89] found id: ""
	I0910 19:02:06.137243   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.137254   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:06.137261   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:06.137323   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:06.175227   72122 cri.go:89] found id: ""
	I0910 19:02:06.175251   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.175259   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:06.175265   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:06.175311   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:06.210197   72122 cri.go:89] found id: ""
	I0910 19:02:06.210222   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.210230   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:06.210238   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:06.210249   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:06.261317   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:06.261353   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:06.275196   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:06.275225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:06.354186   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:06.354205   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:06.354219   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:06.436726   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:06.436763   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:08.723505   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:10.724499   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:09.035939   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:11.534648   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:09.166629   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:11.666941   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:08.979157   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:08.992097   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:08.992156   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:09.025260   72122 cri.go:89] found id: ""
	I0910 19:02:09.025282   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.025289   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:09.025295   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:09.025360   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:09.059139   72122 cri.go:89] found id: ""
	I0910 19:02:09.059166   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.059177   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:09.059186   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:09.059240   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:09.092935   72122 cri.go:89] found id: ""
	I0910 19:02:09.092964   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.092973   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:09.092979   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:09.093027   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:09.127273   72122 cri.go:89] found id: ""
	I0910 19:02:09.127299   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.127310   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:09.127317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:09.127367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:09.163353   72122 cri.go:89] found id: ""
	I0910 19:02:09.163380   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.163389   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:09.163396   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:09.163453   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:09.195371   72122 cri.go:89] found id: ""
	I0910 19:02:09.195396   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.195407   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:09.195414   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:09.195473   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:09.229338   72122 cri.go:89] found id: ""
	I0910 19:02:09.229361   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.229370   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:09.229376   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:09.229432   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:09.262822   72122 cri.go:89] found id: ""
	I0910 19:02:09.262847   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.262857   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:09.262874   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:09.262891   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:09.330079   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:09.330103   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:09.330119   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:09.408969   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:09.409003   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:09.447666   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:09.447702   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:09.501111   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:09.501141   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:12.016407   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:12.030822   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:12.030905   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:12.069191   72122 cri.go:89] found id: ""
	I0910 19:02:12.069218   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.069229   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:12.069236   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:12.069306   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:12.103687   72122 cri.go:89] found id: ""
	I0910 19:02:12.103726   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.103737   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:12.103862   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:12.103937   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:12.142891   72122 cri.go:89] found id: ""
	I0910 19:02:12.142920   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.142932   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:12.142940   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:12.142998   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:12.178966   72122 cri.go:89] found id: ""
	I0910 19:02:12.178991   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.179002   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:12.179010   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:12.179069   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:12.216070   72122 cri.go:89] found id: ""
	I0910 19:02:12.216093   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.216104   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:12.216112   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:12.216161   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:12.251447   72122 cri.go:89] found id: ""
	I0910 19:02:12.251479   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.251492   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:12.251500   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:12.251568   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:12.284640   72122 cri.go:89] found id: ""
	I0910 19:02:12.284666   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.284677   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:12.284682   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:12.284743   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:12.319601   72122 cri.go:89] found id: ""
	I0910 19:02:12.319625   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.319632   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:12.319639   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:12.319650   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:12.372932   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:12.372965   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:12.387204   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:12.387228   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:12.459288   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:12.459308   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:12.459323   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:13.223679   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:15.224341   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:14.034036   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:16.533341   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:13.667258   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:16.164610   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:12.549161   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:12.549198   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:15.092557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:15.105391   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:15.105456   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:15.139486   72122 cri.go:89] found id: ""
	I0910 19:02:15.139515   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.139524   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:15.139530   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:15.139591   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:15.173604   72122 cri.go:89] found id: ""
	I0910 19:02:15.173630   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.173641   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:15.173648   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:15.173710   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:15.208464   72122 cri.go:89] found id: ""
	I0910 19:02:15.208492   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.208503   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:15.208510   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:15.208568   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:15.247536   72122 cri.go:89] found id: ""
	I0910 19:02:15.247567   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.247579   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:15.247586   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:15.247650   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:15.285734   72122 cri.go:89] found id: ""
	I0910 19:02:15.285764   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.285775   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:15.285782   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:15.285858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:15.320755   72122 cri.go:89] found id: ""
	I0910 19:02:15.320782   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.320792   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:15.320798   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:15.320849   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:15.357355   72122 cri.go:89] found id: ""
	I0910 19:02:15.357384   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.357395   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:15.357402   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:15.357463   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:15.392105   72122 cri.go:89] found id: ""
	I0910 19:02:15.392130   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.392137   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:15.392149   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:15.392160   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:15.444433   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:15.444465   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:15.458759   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:15.458784   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:15.523490   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:15.523507   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:15.523524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:15.607584   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:15.607616   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:17.224472   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:19.723953   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:18.534545   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:20.535036   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:18.667949   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:20.669762   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:18.146611   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:18.160311   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:18.160378   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:18.195072   72122 cri.go:89] found id: ""
	I0910 19:02:18.195099   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.195109   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:18.195127   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:18.195201   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:18.230099   72122 cri.go:89] found id: ""
	I0910 19:02:18.230129   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.230138   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:18.230145   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:18.230201   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:18.268497   72122 cri.go:89] found id: ""
	I0910 19:02:18.268525   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.268534   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:18.268539   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:18.268599   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:18.304929   72122 cri.go:89] found id: ""
	I0910 19:02:18.304966   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.304978   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:18.304985   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:18.305048   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:18.339805   72122 cri.go:89] found id: ""
	I0910 19:02:18.339839   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.339861   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:18.339868   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:18.339925   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:18.378353   72122 cri.go:89] found id: ""
	I0910 19:02:18.378372   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.378381   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:18.378393   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:18.378438   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:18.415175   72122 cri.go:89] found id: ""
	I0910 19:02:18.415195   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.415203   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:18.415208   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:18.415262   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:18.450738   72122 cri.go:89] found id: ""
	I0910 19:02:18.450762   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.450769   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:18.450778   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:18.450793   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:18.530943   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:18.530975   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:18.568983   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:18.569021   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:18.622301   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:18.622336   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:18.635788   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:18.635815   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:18.715729   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:21.216082   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:21.229419   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:21.229488   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:21.265152   72122 cri.go:89] found id: ""
	I0910 19:02:21.265183   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.265193   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:21.265201   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:21.265262   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:21.300766   72122 cri.go:89] found id: ""
	I0910 19:02:21.300797   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.300815   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:21.300823   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:21.300883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:21.333416   72122 cri.go:89] found id: ""
	I0910 19:02:21.333443   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.333452   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:21.333460   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:21.333526   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:21.371112   72122 cri.go:89] found id: ""
	I0910 19:02:21.371142   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.371150   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:21.371156   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:21.371214   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:21.405657   72122 cri.go:89] found id: ""
	I0910 19:02:21.405684   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.405695   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:21.405703   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:21.405755   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:21.440354   72122 cri.go:89] found id: ""
	I0910 19:02:21.440381   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.440392   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:21.440400   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:21.440464   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:21.480165   72122 cri.go:89] found id: ""
	I0910 19:02:21.480189   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.480199   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:21.480206   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:21.480273   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:21.518422   72122 cri.go:89] found id: ""
	I0910 19:02:21.518449   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.518459   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:21.518470   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:21.518486   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:21.572263   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:21.572300   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:21.588179   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:21.588204   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:21.658330   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:21.658356   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:21.658371   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:21.743026   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:21.743063   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:21.724730   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.724844   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:26.225026   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.034593   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:25.037588   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.164712   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:25.664475   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:24.286604   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:24.299783   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:24.299847   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:24.336998   72122 cri.go:89] found id: ""
	I0910 19:02:24.337031   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.337042   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:24.337050   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:24.337123   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:24.374198   72122 cri.go:89] found id: ""
	I0910 19:02:24.374223   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.374231   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:24.374236   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:24.374289   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:24.407783   72122 cri.go:89] found id: ""
	I0910 19:02:24.407812   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.407822   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:24.407828   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:24.407881   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:24.443285   72122 cri.go:89] found id: ""
	I0910 19:02:24.443307   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.443315   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:24.443321   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:24.443367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:24.477176   72122 cri.go:89] found id: ""
	I0910 19:02:24.477198   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.477206   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:24.477212   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:24.477266   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:24.509762   72122 cri.go:89] found id: ""
	I0910 19:02:24.509783   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.509791   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:24.509797   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:24.509858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:24.548746   72122 cri.go:89] found id: ""
	I0910 19:02:24.548775   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.548785   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:24.548793   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:24.548851   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:24.583265   72122 cri.go:89] found id: ""
	I0910 19:02:24.583297   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.583313   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:24.583324   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:24.583338   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:24.634966   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:24.635001   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:24.649844   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:24.649869   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:24.721795   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:24.721824   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:24.721840   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:24.807559   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:24.807593   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:27.352779   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:27.366423   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:27.366495   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:27.399555   72122 cri.go:89] found id: ""
	I0910 19:02:27.399582   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.399591   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:27.399596   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:27.399662   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:27.434151   72122 cri.go:89] found id: ""
	I0910 19:02:27.434179   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.434188   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:27.434194   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:27.434265   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:27.467053   72122 cri.go:89] found id: ""
	I0910 19:02:27.467081   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.467092   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:27.467099   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:27.467156   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:28.724149   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:31.224185   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:27.533697   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:29.533815   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:32.034343   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:27.667816   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:30.164174   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:27.500999   72122 cri.go:89] found id: ""
	I0910 19:02:27.501030   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.501039   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:27.501044   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:27.501114   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:27.537981   72122 cri.go:89] found id: ""
	I0910 19:02:27.538000   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.538007   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:27.538012   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:27.538060   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:27.568622   72122 cri.go:89] found id: ""
	I0910 19:02:27.568649   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.568660   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:27.568668   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:27.568724   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:27.603035   72122 cri.go:89] found id: ""
	I0910 19:02:27.603058   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.603067   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:27.603072   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:27.603131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:27.637624   72122 cri.go:89] found id: ""
	I0910 19:02:27.637651   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.637662   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:27.637673   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:27.637693   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:27.651893   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:27.651915   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:27.723949   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:27.723969   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:27.723983   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:27.801463   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:27.801496   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:27.841969   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:27.842000   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:30.398857   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:30.412720   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:30.412790   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:30.448125   72122 cri.go:89] found id: ""
	I0910 19:02:30.448152   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.448163   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:30.448171   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:30.448234   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:30.481988   72122 cri.go:89] found id: ""
	I0910 19:02:30.482016   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.482027   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:30.482035   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:30.482083   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:30.516548   72122 cri.go:89] found id: ""
	I0910 19:02:30.516576   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.516583   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:30.516589   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:30.516646   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:30.566884   72122 cri.go:89] found id: ""
	I0910 19:02:30.566910   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.566918   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:30.566923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:30.566975   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:30.602278   72122 cri.go:89] found id: ""
	I0910 19:02:30.602306   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.602314   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:30.602319   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:30.602379   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:30.636708   72122 cri.go:89] found id: ""
	I0910 19:02:30.636732   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.636740   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:30.636745   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:30.636797   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:30.681255   72122 cri.go:89] found id: ""
	I0910 19:02:30.681280   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.681295   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:30.681303   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:30.681361   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:30.715516   72122 cri.go:89] found id: ""
	I0910 19:02:30.715543   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.715551   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:30.715560   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:30.715572   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:30.768916   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:30.768948   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:30.783318   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:30.783348   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:30.852901   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:30.852925   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:30.852940   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:30.932276   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:30.932314   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:33.725716   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:36.223970   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:34.533148   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:36.533854   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:32.667516   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:35.164375   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:33.471931   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:33.486152   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:33.486211   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:33.524130   72122 cri.go:89] found id: ""
	I0910 19:02:33.524161   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.524173   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:33.524180   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:33.524238   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:33.562216   72122 cri.go:89] found id: ""
	I0910 19:02:33.562238   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.562247   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:33.562252   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:33.562305   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:33.596587   72122 cri.go:89] found id: ""
	I0910 19:02:33.596615   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.596626   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:33.596634   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:33.596692   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:33.633307   72122 cri.go:89] found id: ""
	I0910 19:02:33.633330   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.633338   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:33.633343   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:33.633403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:33.667780   72122 cri.go:89] found id: ""
	I0910 19:02:33.667805   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.667815   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:33.667820   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:33.667878   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:33.702406   72122 cri.go:89] found id: ""
	I0910 19:02:33.702436   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.702447   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:33.702456   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:33.702524   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:33.744544   72122 cri.go:89] found id: ""
	I0910 19:02:33.744574   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.744581   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:33.744587   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:33.744661   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:33.782000   72122 cri.go:89] found id: ""
	I0910 19:02:33.782024   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.782032   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:33.782040   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:33.782053   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:33.858087   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:33.858115   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:33.858133   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:33.943238   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:33.943278   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:33.987776   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:33.987804   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:34.043197   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:34.043232   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:36.558122   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:36.571125   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:36.571195   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:36.606195   72122 cri.go:89] found id: ""
	I0910 19:02:36.606228   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.606239   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:36.606246   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:36.606304   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:36.640248   72122 cri.go:89] found id: ""
	I0910 19:02:36.640290   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.640302   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:36.640310   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:36.640360   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:36.676916   72122 cri.go:89] found id: ""
	I0910 19:02:36.676942   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.676952   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:36.676958   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:36.677013   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:36.713183   72122 cri.go:89] found id: ""
	I0910 19:02:36.713207   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.713218   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:36.713225   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:36.713283   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:36.750748   72122 cri.go:89] found id: ""
	I0910 19:02:36.750775   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.750782   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:36.750787   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:36.750847   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:36.782614   72122 cri.go:89] found id: ""
	I0910 19:02:36.782636   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.782644   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:36.782650   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:36.782709   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:36.822051   72122 cri.go:89] found id: ""
	I0910 19:02:36.822083   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.822094   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:36.822102   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:36.822161   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:36.856068   72122 cri.go:89] found id: ""
	I0910 19:02:36.856096   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.856106   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:36.856117   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:36.856132   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:36.909586   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:36.909625   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:36.931649   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:36.931676   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:37.040146   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:37.040175   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:37.040194   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:37.121902   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:37.121942   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:38.723762   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:40.723880   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:38.534001   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:41.035356   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:37.665212   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:39.668115   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:42.164118   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:39.665474   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:39.678573   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:39.678633   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:39.712755   72122 cri.go:89] found id: ""
	I0910 19:02:39.712783   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.712793   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:39.712800   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:39.712857   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:39.744709   72122 cri.go:89] found id: ""
	I0910 19:02:39.744738   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.744748   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:39.744756   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:39.744809   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:39.780161   72122 cri.go:89] found id: ""
	I0910 19:02:39.780189   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.780200   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:39.780207   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:39.780255   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:39.817665   72122 cri.go:89] found id: ""
	I0910 19:02:39.817695   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.817704   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:39.817709   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:39.817757   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:39.857255   72122 cri.go:89] found id: ""
	I0910 19:02:39.857291   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.857299   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:39.857306   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:39.857381   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:39.893514   72122 cri.go:89] found id: ""
	I0910 19:02:39.893540   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.893550   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:39.893558   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:39.893614   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:39.932720   72122 cri.go:89] found id: ""
	I0910 19:02:39.932753   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.932767   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:39.932775   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:39.932835   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:39.977063   72122 cri.go:89] found id: ""
	I0910 19:02:39.977121   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.977135   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:39.977146   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:39.977168   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:39.991414   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:39.991445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:40.066892   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:40.066910   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:40.066922   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:40.150648   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:40.150680   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:40.198519   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:40.198561   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:42.724332   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:45.223804   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:43.533841   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:45.534665   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:44.164851   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:46.165259   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:42.749906   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:42.769633   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:42.769703   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:42.812576   72122 cri.go:89] found id: ""
	I0910 19:02:42.812603   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.812613   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:42.812620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:42.812682   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:42.846233   72122 cri.go:89] found id: ""
	I0910 19:02:42.846257   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.846266   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:42.846271   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:42.846326   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:42.883564   72122 cri.go:89] found id: ""
	I0910 19:02:42.883593   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.883605   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:42.883612   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:42.883669   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:42.920774   72122 cri.go:89] found id: ""
	I0910 19:02:42.920801   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.920813   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:42.920820   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:42.920883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:42.953776   72122 cri.go:89] found id: ""
	I0910 19:02:42.953808   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.953820   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:42.953829   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:42.953887   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:42.989770   72122 cri.go:89] found id: ""
	I0910 19:02:42.989806   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.989820   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:42.989829   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:42.989893   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:43.022542   72122 cri.go:89] found id: ""
	I0910 19:02:43.022567   72122 logs.go:276] 0 containers: []
	W0910 19:02:43.022574   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:43.022580   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:43.022629   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:43.064308   72122 cri.go:89] found id: ""
	I0910 19:02:43.064329   72122 logs.go:276] 0 containers: []
	W0910 19:02:43.064337   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:43.064344   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:43.064356   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:43.120212   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:43.120243   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:43.134269   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:43.134296   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:43.218840   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:43.218865   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:43.218880   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:43.302560   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:43.302591   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:45.842788   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:45.857495   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:45.857557   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:45.892745   72122 cri.go:89] found id: ""
	I0910 19:02:45.892772   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.892782   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:45.892790   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:45.892888   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:45.928451   72122 cri.go:89] found id: ""
	I0910 19:02:45.928476   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.928486   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:45.928493   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:45.928551   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:45.962868   72122 cri.go:89] found id: ""
	I0910 19:02:45.962899   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.962910   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:45.962918   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:45.962979   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:45.996975   72122 cri.go:89] found id: ""
	I0910 19:02:45.997000   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.997009   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:45.997014   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:45.997065   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:46.032271   72122 cri.go:89] found id: ""
	I0910 19:02:46.032299   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.032309   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:46.032317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:46.032375   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:46.072629   72122 cri.go:89] found id: ""
	I0910 19:02:46.072654   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.072662   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:46.072667   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:46.072713   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:46.112196   72122 cri.go:89] found id: ""
	I0910 19:02:46.112220   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.112228   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:46.112233   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:46.112298   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:46.155700   72122 cri.go:89] found id: ""
	I0910 19:02:46.155732   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.155745   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:46.155759   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:46.155794   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:46.210596   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:46.210624   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:46.224951   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:46.224980   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:46.294571   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:46.294597   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:46.294613   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:46.382431   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:46.382495   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:47.224808   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:49.225392   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:51.227601   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:48.033643   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:50.535490   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:48.665543   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:50.666596   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:48.926582   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:48.941256   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:48.941338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:48.979810   72122 cri.go:89] found id: ""
	I0910 19:02:48.979842   72122 logs.go:276] 0 containers: []
	W0910 19:02:48.979849   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:48.979856   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:48.979917   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:49.015083   72122 cri.go:89] found id: ""
	I0910 19:02:49.015126   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.015136   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:49.015144   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:49.015205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:49.052417   72122 cri.go:89] found id: ""
	I0910 19:02:49.052445   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.052453   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:49.052459   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:49.052511   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:49.092485   72122 cri.go:89] found id: ""
	I0910 19:02:49.092523   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.092533   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:49.092538   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:49.092588   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:49.127850   72122 cri.go:89] found id: ""
	I0910 19:02:49.127882   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.127889   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:49.127897   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:49.127952   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:49.160693   72122 cri.go:89] found id: ""
	I0910 19:02:49.160724   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.160733   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:49.160740   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:49.160798   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:49.194713   72122 cri.go:89] found id: ""
	I0910 19:02:49.194737   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.194744   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:49.194750   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:49.194804   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:49.229260   72122 cri.go:89] found id: ""
	I0910 19:02:49.229283   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.229292   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:49.229303   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:49.229320   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:49.281963   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:49.281992   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:49.294789   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:49.294809   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:49.366126   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:49.366152   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:49.366172   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:49.451187   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:49.451225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:51.990361   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:52.003744   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:52.003807   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:52.036794   72122 cri.go:89] found id: ""
	I0910 19:02:52.036824   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.036834   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:52.036840   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:52.036896   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:52.074590   72122 cri.go:89] found id: ""
	I0910 19:02:52.074613   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.074620   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:52.074625   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:52.074675   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:52.119926   72122 cri.go:89] found id: ""
	I0910 19:02:52.119967   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.119981   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:52.119990   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:52.120075   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:52.157862   72122 cri.go:89] found id: ""
	I0910 19:02:52.157889   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.157900   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:52.157906   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:52.157968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:52.198645   72122 cri.go:89] found id: ""
	I0910 19:02:52.198675   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.198686   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:52.198693   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:52.198756   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:52.240091   72122 cri.go:89] found id: ""
	I0910 19:02:52.240113   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.240129   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:52.240139   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:52.240197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:52.275046   72122 cri.go:89] found id: ""
	I0910 19:02:52.275079   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.275090   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:52.275098   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:52.275179   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:52.311141   72122 cri.go:89] found id: ""
	I0910 19:02:52.311172   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.311184   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:52.311196   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:52.311211   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:52.400004   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:52.400039   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:52.449043   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:52.449090   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:53.724151   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:56.223353   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:53.033328   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:55.035259   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:53.164639   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:55.165714   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:52.502304   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:52.502336   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:52.518747   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:52.518772   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:52.593581   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:55.094092   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:55.108752   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:55.108830   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:55.143094   72122 cri.go:89] found id: ""
	I0910 19:02:55.143122   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.143133   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:55.143141   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:55.143198   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:55.184298   72122 cri.go:89] found id: ""
	I0910 19:02:55.184326   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.184334   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:55.184340   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:55.184397   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:55.216557   72122 cri.go:89] found id: ""
	I0910 19:02:55.216585   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.216596   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:55.216613   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:55.216676   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:55.251049   72122 cri.go:89] found id: ""
	I0910 19:02:55.251075   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.251083   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:55.251090   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:55.251152   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:55.282689   72122 cri.go:89] found id: ""
	I0910 19:02:55.282716   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.282724   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:55.282729   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:55.282800   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:55.316959   72122 cri.go:89] found id: ""
	I0910 19:02:55.316993   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.317004   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:55.317011   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:55.317085   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:55.353110   72122 cri.go:89] found id: ""
	I0910 19:02:55.353134   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.353143   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:55.353149   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:55.353205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:55.392391   72122 cri.go:89] found id: ""
	I0910 19:02:55.392422   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.392434   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:55.392446   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:55.392461   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:55.445431   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:55.445469   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:55.459348   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:55.459374   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:55.528934   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:55.528957   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:55.528973   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:55.610797   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:55.610833   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:58.223882   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:00.223951   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:57.533754   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:59.535018   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:01.535255   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:57.667276   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:00.164510   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:58.152775   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:58.166383   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:58.166440   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:58.203198   72122 cri.go:89] found id: ""
	I0910 19:02:58.203225   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.203233   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:58.203239   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:58.203284   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:58.240538   72122 cri.go:89] found id: ""
	I0910 19:02:58.240560   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.240567   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:58.240573   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:58.240633   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:58.274802   72122 cri.go:89] found id: ""
	I0910 19:02:58.274826   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.274833   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:58.274839   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:58.274886   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:58.311823   72122 cri.go:89] found id: ""
	I0910 19:02:58.311857   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.311868   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:58.311876   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:58.311933   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:58.347260   72122 cri.go:89] found id: ""
	I0910 19:02:58.347281   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.347288   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:58.347294   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:58.347338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:58.382621   72122 cri.go:89] found id: ""
	I0910 19:02:58.382645   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.382655   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:58.382662   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:58.382720   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:58.418572   72122 cri.go:89] found id: ""
	I0910 19:02:58.418597   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.418605   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:58.418611   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:58.418663   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:58.459955   72122 cri.go:89] found id: ""
	I0910 19:02:58.459987   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.459995   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:58.460003   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:58.460016   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:58.512831   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:58.512866   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:58.527036   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:58.527067   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:58.593329   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:58.593350   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:58.593366   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:58.671171   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:58.671201   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:01.211905   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:01.226567   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:01.226724   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:01.261860   72122 cri.go:89] found id: ""
	I0910 19:03:01.261885   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.261893   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:01.261898   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:01.261946   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:01.294754   72122 cri.go:89] found id: ""
	I0910 19:03:01.294774   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.294781   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:01.294786   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:01.294833   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:01.328378   72122 cri.go:89] found id: ""
	I0910 19:03:01.328403   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.328412   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:01.328417   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:01.328465   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:01.363344   72122 cri.go:89] found id: ""
	I0910 19:03:01.363370   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.363380   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:01.363388   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:01.363446   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:01.398539   72122 cri.go:89] found id: ""
	I0910 19:03:01.398576   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.398586   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:01.398593   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:01.398654   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:01.431367   72122 cri.go:89] found id: ""
	I0910 19:03:01.431390   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.431397   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:01.431403   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:01.431458   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:01.464562   72122 cri.go:89] found id: ""
	I0910 19:03:01.464589   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.464599   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:01.464606   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:01.464666   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:01.497493   72122 cri.go:89] found id: ""
	I0910 19:03:01.497520   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.497531   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:01.497540   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:01.497555   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:01.583083   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:01.583140   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:01.624887   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:01.624919   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:01.676124   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:01.676155   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:01.690861   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:01.690894   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:01.763695   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:02.724017   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.725049   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.033371   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:06.033600   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:02.666137   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.669740   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:07.164822   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.264867   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:04.279106   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:04.279176   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:04.315358   72122 cri.go:89] found id: ""
	I0910 19:03:04.315390   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.315398   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:04.315403   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:04.315457   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:04.359466   72122 cri.go:89] found id: ""
	I0910 19:03:04.359489   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.359496   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:04.359504   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:04.359563   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:04.399504   72122 cri.go:89] found id: ""
	I0910 19:03:04.399529   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.399538   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:04.399545   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:04.399604   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:04.438244   72122 cri.go:89] found id: ""
	I0910 19:03:04.438269   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.438277   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:04.438282   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:04.438340   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:04.475299   72122 cri.go:89] found id: ""
	I0910 19:03:04.475321   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.475329   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:04.475334   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:04.475386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:04.516500   72122 cri.go:89] found id: ""
	I0910 19:03:04.516520   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.516529   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:04.516534   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:04.516588   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:04.551191   72122 cri.go:89] found id: ""
	I0910 19:03:04.551214   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.551222   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:04.551228   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:04.551273   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:04.585646   72122 cri.go:89] found id: ""
	I0910 19:03:04.585667   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.585675   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:04.585684   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:04.585699   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:04.598832   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:04.598858   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:04.670117   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:04.670140   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:04.670156   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:04.746592   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:04.746626   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:04.784061   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:04.784088   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:07.337082   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:07.350696   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:07.350752   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:07.387344   72122 cri.go:89] found id: ""
	I0910 19:03:07.387373   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.387384   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:07.387391   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:07.387449   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:07.420468   72122 cri.go:89] found id: ""
	I0910 19:03:07.420490   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.420498   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:07.420503   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:07.420566   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:07.453746   72122 cri.go:89] found id: ""
	I0910 19:03:07.453773   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.453784   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:07.453791   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:07.453845   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:07.487359   72122 cri.go:89] found id: ""
	I0910 19:03:07.487388   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.487400   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:07.487407   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:07.487470   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:07.223432   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:09.723164   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:08.033767   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:10.035613   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:09.165972   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:11.663740   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:07.520803   72122 cri.go:89] found id: ""
	I0910 19:03:07.520827   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.520834   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:07.520839   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:07.520898   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:07.556908   72122 cri.go:89] found id: ""
	I0910 19:03:07.556934   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.556945   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:07.556953   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:07.557017   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:07.596072   72122 cri.go:89] found id: ""
	I0910 19:03:07.596093   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.596102   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:07.596107   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:07.596165   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:07.631591   72122 cri.go:89] found id: ""
	I0910 19:03:07.631620   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.631630   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:07.631639   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:07.631661   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:07.683892   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:07.683923   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:07.697619   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:07.697645   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:07.766370   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:07.766397   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:07.766413   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:07.854102   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:07.854140   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:10.400185   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:10.412771   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:10.412842   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:10.447710   72122 cri.go:89] found id: ""
	I0910 19:03:10.447739   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.447750   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:10.447757   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:10.447822   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:10.480865   72122 cri.go:89] found id: ""
	I0910 19:03:10.480892   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.480902   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:10.480909   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:10.480966   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:10.514893   72122 cri.go:89] found id: ""
	I0910 19:03:10.514919   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.514927   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:10.514933   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:10.514994   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:10.556332   72122 cri.go:89] found id: ""
	I0910 19:03:10.556374   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.556385   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:10.556392   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:10.556457   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:10.590529   72122 cri.go:89] found id: ""
	I0910 19:03:10.590562   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.590573   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:10.590581   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:10.590642   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:10.623697   72122 cri.go:89] found id: ""
	I0910 19:03:10.623724   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.623732   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:10.623737   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:10.623788   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:10.659236   72122 cri.go:89] found id: ""
	I0910 19:03:10.659259   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.659270   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:10.659277   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:10.659338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:10.693150   72122 cri.go:89] found id: ""
	I0910 19:03:10.693182   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.693192   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:10.693202   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:10.693217   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:10.744624   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:10.744663   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:10.758797   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:10.758822   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:10.853796   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:10.853815   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:10.853827   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:10.937972   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:10.938019   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:11.724808   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:14.224052   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:12.535134   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:15.033867   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:17.034507   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:13.667548   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:16.164483   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:13.481898   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:13.495440   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:13.495505   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:13.531423   72122 cri.go:89] found id: ""
	I0910 19:03:13.531452   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.531463   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:13.531470   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:13.531532   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:13.571584   72122 cri.go:89] found id: ""
	I0910 19:03:13.571607   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.571615   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:13.571620   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:13.571674   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:13.609670   72122 cri.go:89] found id: ""
	I0910 19:03:13.609695   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.609702   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:13.609707   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:13.609761   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:13.644726   72122 cri.go:89] found id: ""
	I0910 19:03:13.644755   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.644766   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:13.644773   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:13.644831   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:13.679692   72122 cri.go:89] found id: ""
	I0910 19:03:13.679722   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.679733   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:13.679741   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:13.679791   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:13.717148   72122 cri.go:89] found id: ""
	I0910 19:03:13.717177   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.717186   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:13.717192   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:13.717247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:13.755650   72122 cri.go:89] found id: ""
	I0910 19:03:13.755676   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.755688   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:13.755693   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:13.755740   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:13.788129   72122 cri.go:89] found id: ""
	I0910 19:03:13.788158   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.788169   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:13.788179   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:13.788194   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:13.865241   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:13.865277   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:13.909205   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:13.909233   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:13.963495   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:13.963523   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:13.977311   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:13.977337   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:14.047015   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:16.547505   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:16.568333   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:16.568412   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:16.610705   72122 cri.go:89] found id: ""
	I0910 19:03:16.610734   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.610744   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:16.610752   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:16.610808   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:16.647307   72122 cri.go:89] found id: ""
	I0910 19:03:16.647333   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.647340   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:16.647345   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:16.647409   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:16.684513   72122 cri.go:89] found id: ""
	I0910 19:03:16.684536   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.684544   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:16.684549   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:16.684602   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:16.718691   72122 cri.go:89] found id: ""
	I0910 19:03:16.718719   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.718729   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:16.718734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:16.718794   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:16.753250   72122 cri.go:89] found id: ""
	I0910 19:03:16.753279   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.753291   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:16.753298   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:16.753358   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:16.788953   72122 cri.go:89] found id: ""
	I0910 19:03:16.788984   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.789001   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:16.789009   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:16.789084   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:16.823715   72122 cri.go:89] found id: ""
	I0910 19:03:16.823746   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.823760   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:16.823767   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:16.823837   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:16.858734   72122 cri.go:89] found id: ""
	I0910 19:03:16.858758   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.858770   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:16.858780   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:16.858795   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:16.897983   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:16.898012   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:16.950981   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:16.951015   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:16.964809   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:16.964839   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:17.039142   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:17.039163   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:17.039177   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:16.724218   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:19.223909   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:19.533783   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:21.534203   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:18.164708   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:20.664302   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:19.619941   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:19.634432   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:19.634489   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:19.671220   72122 cri.go:89] found id: ""
	I0910 19:03:19.671246   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.671256   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:19.671264   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:19.671322   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:19.704251   72122 cri.go:89] found id: ""
	I0910 19:03:19.704278   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.704294   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:19.704301   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:19.704347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:19.745366   72122 cri.go:89] found id: ""
	I0910 19:03:19.745393   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.745403   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:19.745410   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:19.745466   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:19.781100   72122 cri.go:89] found id: ""
	I0910 19:03:19.781129   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.781136   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:19.781141   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:19.781195   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:19.817177   72122 cri.go:89] found id: ""
	I0910 19:03:19.817207   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.817219   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:19.817226   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:19.817292   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:19.852798   72122 cri.go:89] found id: ""
	I0910 19:03:19.852829   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.852837   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:19.852842   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:19.852889   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:19.887173   72122 cri.go:89] found id: ""
	I0910 19:03:19.887200   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.887210   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:19.887219   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:19.887409   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:19.922997   72122 cri.go:89] found id: ""
	I0910 19:03:19.923026   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.923038   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:19.923049   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:19.923063   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:19.975703   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:19.975736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:19.989834   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:19.989866   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:20.061312   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:20.061332   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:20.061344   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:20.143045   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:20.143080   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:21.723250   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:23.723771   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:25.724346   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:24.036790   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:26.533830   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:22.664756   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:25.164650   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:22.681900   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:22.694860   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:22.694923   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:22.738529   72122 cri.go:89] found id: ""
	I0910 19:03:22.738553   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.738563   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:22.738570   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:22.738640   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:22.778102   72122 cri.go:89] found id: ""
	I0910 19:03:22.778132   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.778143   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:22.778150   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:22.778207   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:22.813273   72122 cri.go:89] found id: ""
	I0910 19:03:22.813307   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.813320   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:22.813334   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:22.813397   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:22.849613   72122 cri.go:89] found id: ""
	I0910 19:03:22.849637   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.849646   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:22.849651   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:22.849701   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:22.883138   72122 cri.go:89] found id: ""
	I0910 19:03:22.883167   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.883178   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:22.883185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:22.883237   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:22.918521   72122 cri.go:89] found id: ""
	I0910 19:03:22.918550   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.918567   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:22.918574   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:22.918632   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:22.966657   72122 cri.go:89] found id: ""
	I0910 19:03:22.966684   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.966691   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:22.966701   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:22.966762   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:23.022254   72122 cri.go:89] found id: ""
	I0910 19:03:23.022282   72122 logs.go:276] 0 containers: []
	W0910 19:03:23.022290   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:23.022298   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:23.022309   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:23.082347   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:23.082386   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:23.096792   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:23.096814   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:23.172720   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:23.172740   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:23.172754   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:23.256155   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:23.256193   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:25.797211   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:25.810175   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:25.810234   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:25.844848   72122 cri.go:89] found id: ""
	I0910 19:03:25.844876   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.844886   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:25.844901   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:25.844968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:25.877705   72122 cri.go:89] found id: ""
	I0910 19:03:25.877736   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.877747   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:25.877755   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:25.877807   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:25.913210   72122 cri.go:89] found id: ""
	I0910 19:03:25.913238   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.913248   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:25.913256   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:25.913316   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:25.947949   72122 cri.go:89] found id: ""
	I0910 19:03:25.947974   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.947984   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:25.947991   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:25.948050   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:25.983487   72122 cri.go:89] found id: ""
	I0910 19:03:25.983511   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.983519   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:25.983524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:25.983573   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:26.018176   72122 cri.go:89] found id: ""
	I0910 19:03:26.018201   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.018209   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:26.018214   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:26.018271   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:26.052063   72122 cri.go:89] found id: ""
	I0910 19:03:26.052087   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.052097   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:26.052104   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:26.052165   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:26.091919   72122 cri.go:89] found id: ""
	I0910 19:03:26.091949   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.091958   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:26.091968   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:26.091983   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:26.146059   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:26.146094   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:26.160529   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:26.160562   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:26.230742   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:26.230764   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:26.230778   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:26.313191   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:26.313222   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:27.724922   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:30.223811   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:29.039957   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:31.533256   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:27.665626   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:29.666857   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:32.165153   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:28.858457   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:28.873725   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:28.873788   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:28.922685   72122 cri.go:89] found id: ""
	I0910 19:03:28.922717   72122 logs.go:276] 0 containers: []
	W0910 19:03:28.922729   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:28.922737   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:28.922795   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:28.973236   72122 cri.go:89] found id: ""
	I0910 19:03:28.973260   72122 logs.go:276] 0 containers: []
	W0910 19:03:28.973270   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:28.973277   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:28.973339   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:29.008999   72122 cri.go:89] found id: ""
	I0910 19:03:29.009049   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.009062   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:29.009081   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:29.009148   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:29.049009   72122 cri.go:89] found id: ""
	I0910 19:03:29.049037   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.049047   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:29.049056   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:29.049131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:29.089543   72122 cri.go:89] found id: ""
	I0910 19:03:29.089564   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.089573   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:29.089578   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:29.089648   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:29.126887   72122 cri.go:89] found id: ""
	I0910 19:03:29.126911   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.126918   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:29.126924   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:29.126969   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:29.161369   72122 cri.go:89] found id: ""
	I0910 19:03:29.161395   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.161405   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:29.161412   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:29.161474   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:29.199627   72122 cri.go:89] found id: ""
	I0910 19:03:29.199652   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.199661   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:29.199672   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:29.199691   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:29.268353   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:29.268386   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:29.268401   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:29.351470   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:29.351504   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:29.391768   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:29.391796   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:29.442705   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:29.442736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:31.957567   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:31.970218   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:31.970274   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:32.004870   72122 cri.go:89] found id: ""
	I0910 19:03:32.004898   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.004908   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:32.004915   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:32.004971   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:32.045291   72122 cri.go:89] found id: ""
	I0910 19:03:32.045322   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.045331   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:32.045337   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:32.045403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:32.085969   72122 cri.go:89] found id: ""
	I0910 19:03:32.085999   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.086007   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:32.086013   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:32.086067   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:32.120100   72122 cri.go:89] found id: ""
	I0910 19:03:32.120127   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.120135   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:32.120141   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:32.120187   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:32.153977   72122 cri.go:89] found id: ""
	I0910 19:03:32.154004   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.154011   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:32.154016   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:32.154065   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:32.195980   72122 cri.go:89] found id: ""
	I0910 19:03:32.196005   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.196013   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:32.196019   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:32.196068   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:32.233594   72122 cri.go:89] found id: ""
	I0910 19:03:32.233616   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.233623   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:32.233632   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:32.233677   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:32.268118   72122 cri.go:89] found id: ""
	I0910 19:03:32.268144   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.268152   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:32.268160   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:32.268171   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:32.281389   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:32.281416   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:32.359267   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:32.359289   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:32.359304   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:32.445096   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:32.445137   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:32.483288   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:32.483325   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:32.224155   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:34.724191   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:33.537955   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:36.033801   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:34.663475   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:36.665627   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:35.040393   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:35.053698   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:35.053750   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:35.087712   72122 cri.go:89] found id: ""
	I0910 19:03:35.087742   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.087751   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:35.087757   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:35.087802   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:35.125437   72122 cri.go:89] found id: ""
	I0910 19:03:35.125468   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.125482   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:35.125495   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:35.125562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:35.163885   72122 cri.go:89] found id: ""
	I0910 19:03:35.163914   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.163924   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:35.163931   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:35.163989   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:35.199426   72122 cri.go:89] found id: ""
	I0910 19:03:35.199459   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.199471   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:35.199479   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:35.199559   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:35.236388   72122 cri.go:89] found id: ""
	I0910 19:03:35.236408   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.236416   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:35.236421   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:35.236465   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:35.274797   72122 cri.go:89] found id: ""
	I0910 19:03:35.274817   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.274825   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:35.274830   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:35.274874   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:35.308127   72122 cri.go:89] found id: ""
	I0910 19:03:35.308155   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.308166   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:35.308173   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:35.308230   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:35.340675   72122 cri.go:89] found id: ""
	I0910 19:03:35.340697   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.340704   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:35.340712   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:35.340727   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:35.390806   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:35.390842   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:35.404427   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:35.404458   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:35.471526   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:35.471560   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:35.471575   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:35.547469   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:35.547497   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:37.223464   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:39.224137   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:41.224189   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:38.534280   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:40.534728   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:38.666077   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:41.165483   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:38.087127   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:38.100195   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:38.100251   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:38.135386   72122 cri.go:89] found id: ""
	I0910 19:03:38.135408   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.135416   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:38.135422   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:38.135480   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:38.168531   72122 cri.go:89] found id: ""
	I0910 19:03:38.168558   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.168568   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:38.168577   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:38.168639   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:38.202931   72122 cri.go:89] found id: ""
	I0910 19:03:38.202958   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.202968   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:38.202974   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:38.203030   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:38.239185   72122 cri.go:89] found id: ""
	I0910 19:03:38.239209   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.239219   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:38.239226   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:38.239279   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:38.276927   72122 cri.go:89] found id: ""
	I0910 19:03:38.276952   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.276961   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:38.276967   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:38.277035   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:38.311923   72122 cri.go:89] found id: ""
	I0910 19:03:38.311951   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.311962   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:38.311971   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:38.312034   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:38.344981   72122 cri.go:89] found id: ""
	I0910 19:03:38.345012   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.345023   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:38.345030   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:38.345099   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:38.378012   72122 cri.go:89] found id: ""
	I0910 19:03:38.378037   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.378048   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:38.378058   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:38.378076   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:38.449361   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:38.449384   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:38.449396   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:38.530683   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:38.530713   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:38.570047   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:38.570073   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:38.620143   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:38.620176   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:41.134152   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:41.148416   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:41.148509   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:41.186681   72122 cri.go:89] found id: ""
	I0910 19:03:41.186706   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.186713   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:41.186719   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:41.186767   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:41.221733   72122 cri.go:89] found id: ""
	I0910 19:03:41.221758   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.221769   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:41.221776   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:41.221834   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:41.256099   72122 cri.go:89] found id: ""
	I0910 19:03:41.256125   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.256136   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:41.256143   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:41.256194   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:41.289825   72122 cri.go:89] found id: ""
	I0910 19:03:41.289850   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.289860   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:41.289867   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:41.289926   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:41.323551   72122 cri.go:89] found id: ""
	I0910 19:03:41.323581   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.323594   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:41.323601   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:41.323659   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:41.356508   72122 cri.go:89] found id: ""
	I0910 19:03:41.356535   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.356546   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:41.356553   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:41.356608   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:41.391556   72122 cri.go:89] found id: ""
	I0910 19:03:41.391579   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.391586   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:41.391592   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:41.391651   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:41.427685   72122 cri.go:89] found id: ""
	I0910 19:03:41.427711   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.427726   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:41.427743   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:41.427758   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:41.481970   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:41.482001   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:41.495266   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:41.495290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:41.568334   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:41.568357   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:41.568370   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:41.650178   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:41.650211   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:43.724494   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:46.223803   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:43.034100   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:45.035091   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:43.167877   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:45.664633   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:44.193665   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:44.209118   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:44.209197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:44.245792   72122 cri.go:89] found id: ""
	I0910 19:03:44.245819   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.245829   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:44.245834   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:44.245900   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:44.285673   72122 cri.go:89] found id: ""
	I0910 19:03:44.285699   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.285711   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:44.285719   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:44.285787   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:44.326471   72122 cri.go:89] found id: ""
	I0910 19:03:44.326495   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.326505   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:44.326520   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:44.326589   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:44.367864   72122 cri.go:89] found id: ""
	I0910 19:03:44.367890   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.367898   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:44.367907   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:44.367954   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:44.407161   72122 cri.go:89] found id: ""
	I0910 19:03:44.407185   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.407193   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:44.407198   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:44.407256   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:44.446603   72122 cri.go:89] found id: ""
	I0910 19:03:44.446628   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.446638   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:44.446645   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:44.446705   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:44.486502   72122 cri.go:89] found id: ""
	I0910 19:03:44.486526   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.486536   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:44.486543   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:44.486605   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:44.524992   72122 cri.go:89] found id: ""
	I0910 19:03:44.525017   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.525025   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:44.525033   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:44.525044   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:44.579387   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:44.579418   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:44.594045   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:44.594070   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:44.678857   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:44.678883   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:44.678897   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:44.763799   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:44.763830   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:47.305631   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:47.319275   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:47.319347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:47.359199   72122 cri.go:89] found id: ""
	I0910 19:03:47.359222   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.359233   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:47.359240   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:47.359300   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:47.397579   72122 cri.go:89] found id: ""
	I0910 19:03:47.397602   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.397610   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:47.397616   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:47.397674   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:47.431114   72122 cri.go:89] found id: ""
	I0910 19:03:47.431138   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.431146   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:47.431151   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:47.431205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:47.470475   72122 cri.go:89] found id: ""
	I0910 19:03:47.470499   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.470509   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:47.470515   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:47.470573   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:48.227625   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:50.725421   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:47.534967   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:49.027864   71529 pod_ready.go:82] duration metric: took 4m0.000448579s for pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace to be "Ready" ...
	E0910 19:03:49.027890   71529 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0910 19:03:49.027905   71529 pod_ready.go:39] duration metric: took 4m14.536052937s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:03:49.027929   71529 kubeadm.go:597] duration metric: took 4m22.283340761s to restartPrimaryControlPlane
	W0910 19:03:49.027982   71529 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0910 19:03:49.028009   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:03:47.668029   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:50.164077   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:47.504484   72122 cri.go:89] found id: ""
	I0910 19:03:47.504509   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.504518   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:47.504524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:47.504577   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:47.541633   72122 cri.go:89] found id: ""
	I0910 19:03:47.541651   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.541658   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:47.541663   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:47.541706   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:47.579025   72122 cri.go:89] found id: ""
	I0910 19:03:47.579051   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.579060   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:47.579068   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:47.579123   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:47.612333   72122 cri.go:89] found id: ""
	I0910 19:03:47.612359   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.612370   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:47.612380   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:47.612395   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:47.667214   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:47.667242   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:47.683425   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:47.683466   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:47.749510   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:47.749531   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:47.749543   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:47.830454   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:47.830487   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:50.373207   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:50.387191   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:50.387247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:50.422445   72122 cri.go:89] found id: ""
	I0910 19:03:50.422476   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.422488   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:50.422495   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:50.422562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:50.456123   72122 cri.go:89] found id: ""
	I0910 19:03:50.456145   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.456153   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:50.456157   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:50.456211   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:50.488632   72122 cri.go:89] found id: ""
	I0910 19:03:50.488661   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.488672   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:50.488680   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:50.488736   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:50.523603   72122 cri.go:89] found id: ""
	I0910 19:03:50.523628   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.523636   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:50.523641   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:50.523699   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:50.559741   72122 cri.go:89] found id: ""
	I0910 19:03:50.559765   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.559773   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:50.559778   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:50.559842   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:50.595387   72122 cri.go:89] found id: ""
	I0910 19:03:50.595406   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.595414   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:50.595420   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:50.595472   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:50.628720   72122 cri.go:89] found id: ""
	I0910 19:03:50.628747   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.628767   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:50.628774   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:50.628833   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:50.660635   72122 cri.go:89] found id: ""
	I0910 19:03:50.660655   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.660663   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:50.660671   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:50.660683   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:50.716517   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:50.716544   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:50.731411   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:50.731443   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:50.799252   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:50.799275   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:50.799290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:50.874490   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:50.874524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:53.222989   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:55.225335   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:55.225365   71627 pod_ready.go:82] duration metric: took 4m0.007907353s for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	E0910 19:03:55.225523   71627 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0910 19:03:55.225534   71627 pod_ready.go:39] duration metric: took 4m2.40870138s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:03:55.225551   71627 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:03:55.225579   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:55.225629   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:55.270742   71627 cri.go:89] found id: "1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:55.270761   71627 cri.go:89] found id: ""
	I0910 19:03:55.270768   71627 logs.go:276] 1 containers: [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a]
	I0910 19:03:55.270811   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.276233   71627 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:55.276283   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:55.316033   71627 cri.go:89] found id: "f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:55.316051   71627 cri.go:89] found id: ""
	I0910 19:03:55.316058   71627 logs.go:276] 1 containers: [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade]
	I0910 19:03:55.316103   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.320441   71627 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:55.320494   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:55.354406   71627 cri.go:89] found id: "24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:55.354428   71627 cri.go:89] found id: ""
	I0910 19:03:55.354435   71627 logs.go:276] 1 containers: [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100]
	I0910 19:03:55.354482   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.358553   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:55.358621   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:55.393871   71627 cri.go:89] found id: "1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:55.393896   71627 cri.go:89] found id: ""
	I0910 19:03:55.393904   71627 logs.go:276] 1 containers: [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68]
	I0910 19:03:55.393959   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.398102   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:55.398154   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:55.432605   71627 cri.go:89] found id: "48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:55.432625   71627 cri.go:89] found id: ""
	I0910 19:03:55.432632   71627 logs.go:276] 1 containers: [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27]
	I0910 19:03:55.432686   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.437631   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:55.437689   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:55.474250   71627 cri.go:89] found id: "55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:55.474277   71627 cri.go:89] found id: ""
	I0910 19:03:55.474287   71627 logs.go:276] 1 containers: [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20]
	I0910 19:03:55.474352   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.479177   71627 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:55.479235   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:55.514918   71627 cri.go:89] found id: ""
	I0910 19:03:55.514942   71627 logs.go:276] 0 containers: []
	W0910 19:03:55.514951   71627 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:55.514956   71627 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:03:55.515014   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:03:55.549310   71627 cri.go:89] found id: "b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:55.549330   71627 cri.go:89] found id: "173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:55.549335   71627 cri.go:89] found id: ""
	I0910 19:03:55.549347   71627 logs.go:276] 2 containers: [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9]
	I0910 19:03:55.549404   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.553420   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.557502   71627 logs.go:123] Gathering logs for kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] ...
	I0910 19:03:55.557531   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:55.592661   71627 logs.go:123] Gathering logs for kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] ...
	I0910 19:03:55.592685   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:55.629876   71627 logs.go:123] Gathering logs for storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] ...
	I0910 19:03:55.629908   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:55.668935   71627 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:55.668963   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:55.685881   71627 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:55.685906   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:03:55.815552   71627 logs.go:123] Gathering logs for coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] ...
	I0910 19:03:55.815578   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:55.854615   71627 logs.go:123] Gathering logs for kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] ...
	I0910 19:03:55.854640   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:55.906027   71627 logs.go:123] Gathering logs for storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] ...
	I0910 19:03:55.906069   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:55.943771   71627 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:55.943808   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:52.666368   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:55.165213   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:53.417835   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:53.430627   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:53.430694   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:53.469953   72122 cri.go:89] found id: ""
	I0910 19:03:53.469981   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.469992   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:53.469999   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:53.470060   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:53.503712   72122 cri.go:89] found id: ""
	I0910 19:03:53.503739   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.503750   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:53.503757   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:53.503814   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:53.539875   72122 cri.go:89] found id: ""
	I0910 19:03:53.539895   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.539902   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:53.539907   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:53.539952   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:53.575040   72122 cri.go:89] found id: ""
	I0910 19:03:53.575067   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.575078   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:53.575085   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:53.575159   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:53.611171   72122 cri.go:89] found id: ""
	I0910 19:03:53.611193   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.611201   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:53.611206   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:53.611253   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:53.644467   72122 cri.go:89] found id: ""
	I0910 19:03:53.644494   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.644505   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:53.644513   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:53.644575   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:53.680886   72122 cri.go:89] found id: ""
	I0910 19:03:53.680913   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.680924   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:53.680931   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:53.680989   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:53.716834   72122 cri.go:89] found id: ""
	I0910 19:03:53.716863   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.716875   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:53.716885   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:53.716900   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:53.755544   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:53.755568   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:53.807382   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:53.807411   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:53.820289   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:53.820311   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:53.891500   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:53.891524   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:53.891540   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:56.472368   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:56.491939   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:56.492020   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:56.535575   72122 cri.go:89] found id: ""
	I0910 19:03:56.535603   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.535614   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:56.535620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:56.535672   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:56.570366   72122 cri.go:89] found id: ""
	I0910 19:03:56.570390   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.570398   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:56.570403   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:56.570452   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:56.609486   72122 cri.go:89] found id: ""
	I0910 19:03:56.609524   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.609535   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:56.609542   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:56.609608   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:56.650268   72122 cri.go:89] found id: ""
	I0910 19:03:56.650295   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.650305   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:56.650312   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:56.650371   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:56.689113   72122 cri.go:89] found id: ""
	I0910 19:03:56.689139   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.689146   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:56.689154   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:56.689214   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:56.721546   72122 cri.go:89] found id: ""
	I0910 19:03:56.721568   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.721576   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:56.721582   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:56.721639   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:56.753149   72122 cri.go:89] found id: ""
	I0910 19:03:56.753171   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.753179   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:56.753185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:56.753233   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:56.786624   72122 cri.go:89] found id: ""
	I0910 19:03:56.786648   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.786658   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:56.786669   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:56.786683   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:56.840243   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:56.840276   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:56.854453   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:56.854475   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:56.928814   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:56.928835   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:56.928849   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:57.012360   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:57.012403   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:56.443638   71627 logs.go:123] Gathering logs for container status ...
	I0910 19:03:56.443684   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:56.498856   71627 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:56.498897   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:56.573520   71627 logs.go:123] Gathering logs for kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] ...
	I0910 19:03:56.573548   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:56.621270   71627 logs.go:123] Gathering logs for etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] ...
	I0910 19:03:56.621301   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:59.173747   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:59.190441   71627 api_server.go:72] duration metric: took 4m14.110101643s to wait for apiserver process to appear ...
	I0910 19:03:59.190463   71627 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:03:59.190495   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:59.190539   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:59.224716   71627 cri.go:89] found id: "1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:59.224744   71627 cri.go:89] found id: ""
	I0910 19:03:59.224753   71627 logs.go:276] 1 containers: [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a]
	I0910 19:03:59.224811   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.229345   71627 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:59.229412   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:59.263589   71627 cri.go:89] found id: "f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:59.263622   71627 cri.go:89] found id: ""
	I0910 19:03:59.263630   71627 logs.go:276] 1 containers: [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade]
	I0910 19:03:59.263686   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.269664   71627 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:59.269728   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:59.312201   71627 cri.go:89] found id: "24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:59.312224   71627 cri.go:89] found id: ""
	I0910 19:03:59.312233   71627 logs.go:276] 1 containers: [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100]
	I0910 19:03:59.312288   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.317991   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:59.318067   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:59.360625   71627 cri.go:89] found id: "1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:59.360650   71627 cri.go:89] found id: ""
	I0910 19:03:59.360657   71627 logs.go:276] 1 containers: [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68]
	I0910 19:03:59.360707   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.364948   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:59.365010   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:59.404075   71627 cri.go:89] found id: "48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:59.404096   71627 cri.go:89] found id: ""
	I0910 19:03:59.404103   71627 logs.go:276] 1 containers: [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27]
	I0910 19:03:59.404149   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.408098   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:59.408141   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:59.443767   71627 cri.go:89] found id: "55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:59.443792   71627 cri.go:89] found id: ""
	I0910 19:03:59.443802   71627 logs.go:276] 1 containers: [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20]
	I0910 19:03:59.443858   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.448348   71627 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:59.448397   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:59.485373   71627 cri.go:89] found id: ""
	I0910 19:03:59.485401   71627 logs.go:276] 0 containers: []
	W0910 19:03:59.485409   71627 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:59.485414   71627 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:03:59.485470   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:03:59.522641   71627 cri.go:89] found id: "b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:59.522660   71627 cri.go:89] found id: "173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:59.522664   71627 cri.go:89] found id: ""
	I0910 19:03:59.522671   71627 logs.go:276] 2 containers: [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9]
	I0910 19:03:59.522726   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.527283   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.531256   71627 logs.go:123] Gathering logs for etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] ...
	I0910 19:03:59.531275   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:59.576358   71627 logs.go:123] Gathering logs for coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] ...
	I0910 19:03:59.576382   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:59.625938   71627 logs.go:123] Gathering logs for kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] ...
	I0910 19:03:59.625974   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:59.664362   71627 logs.go:123] Gathering logs for kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] ...
	I0910 19:03:59.664386   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:59.718655   71627 logs.go:123] Gathering logs for storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] ...
	I0910 19:03:59.718686   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:59.763954   71627 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:59.763984   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:59.785217   71627 logs.go:123] Gathering logs for kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] ...
	I0910 19:03:59.785248   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:59.836560   71627 logs.go:123] Gathering logs for kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] ...
	I0910 19:03:59.836604   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:59.878973   71627 logs.go:123] Gathering logs for storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] ...
	I0910 19:03:59.879001   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:59.929851   71627 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:59.929878   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:00.400346   71627 logs.go:123] Gathering logs for container status ...
	I0910 19:04:00.400384   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:00.442281   71627 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:00.442307   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:00.510448   71627 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:00.510480   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:03:57.665980   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:59.666054   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:01.668052   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:59.558561   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:59.572993   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:59.573094   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:59.618957   72122 cri.go:89] found id: ""
	I0910 19:03:59.618988   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.618999   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:59.619008   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:59.619072   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:59.662544   72122 cri.go:89] found id: ""
	I0910 19:03:59.662643   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.662661   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:59.662673   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:59.662750   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:59.704323   72122 cri.go:89] found id: ""
	I0910 19:03:59.704349   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.704360   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:59.704367   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:59.704426   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:59.738275   72122 cri.go:89] found id: ""
	I0910 19:03:59.738301   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.738311   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:59.738317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:59.738367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:59.778887   72122 cri.go:89] found id: ""
	I0910 19:03:59.778922   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.778934   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:59.778944   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:59.779010   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:59.814953   72122 cri.go:89] found id: ""
	I0910 19:03:59.814985   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.814995   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:59.815003   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:59.815064   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:59.850016   72122 cri.go:89] found id: ""
	I0910 19:03:59.850048   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.850061   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:59.850069   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:59.850131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:59.887546   72122 cri.go:89] found id: ""
	I0910 19:03:59.887589   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.887600   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:59.887613   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:59.887632   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:59.938761   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:59.938784   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:59.954572   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:59.954603   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:04:00.029593   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:04:00.029622   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:00.029638   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:00.121427   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:04:00.121462   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:02.660924   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:02.674661   72122 kubeadm.go:597] duration metric: took 4m3.166175956s to restartPrimaryControlPlane
	W0910 19:04:02.674744   72122 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0910 19:04:02.674769   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:04:03.133507   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:03.150426   72122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:04:03.161678   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:04:03.173362   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:04:03.173389   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 19:04:03.173436   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:04:03.183872   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:04:03.183934   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:04:03.193891   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:04:03.203385   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:04:03.203450   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:04:03.216255   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:04:03.227938   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:04:03.228001   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:04:03.240799   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:04:03.252871   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:04:03.252922   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:04:03.263682   72122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:04:03.337478   72122 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0910 19:04:03.337564   72122 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:04:03.506276   72122 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:04:03.506454   72122 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:04:03.506587   72122 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 19:04:03.697062   72122 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:04:03.698908   72122 out.go:235]   - Generating certificates and keys ...
	I0910 19:04:03.699004   72122 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:04:03.699083   72122 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:04:03.699184   72122 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:04:03.699270   72122 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:04:03.699361   72122 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:04:03.699517   72122 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:04:03.700040   72122 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:04:03.700773   72122 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:04:03.701529   72122 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:04:03.702334   72122 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:04:03.702627   72122 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:04:03.702715   72122 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:04:03.929760   72122 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:04:03.992724   72122 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:04:04.087552   72122 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:04:04.226550   72122 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:04:04.244695   72122 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:04:04.246125   72122 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:04:04.246187   72122 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:04:04.396099   72122 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:04:03.107779   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 19:04:03.112394   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I0910 19:04:03.113347   71627 api_server.go:141] control plane version: v1.31.0
	I0910 19:04:03.113367   71627 api_server.go:131] duration metric: took 3.922898577s to wait for apiserver health ...
	I0910 19:04:03.113375   71627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:04:03.113399   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:03.113443   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:03.153182   71627 cri.go:89] found id: "1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:04:03.153204   71627 cri.go:89] found id: ""
	I0910 19:04:03.153213   71627 logs.go:276] 1 containers: [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a]
	I0910 19:04:03.153263   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.157842   71627 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:03.157906   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:03.199572   71627 cri.go:89] found id: "f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:04:03.199594   71627 cri.go:89] found id: ""
	I0910 19:04:03.199604   71627 logs.go:276] 1 containers: [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade]
	I0910 19:04:03.199658   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.204332   71627 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:03.204409   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:03.252660   71627 cri.go:89] found id: "24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:04:03.252686   71627 cri.go:89] found id: ""
	I0910 19:04:03.252696   71627 logs.go:276] 1 containers: [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100]
	I0910 19:04:03.252751   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.257850   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:03.257915   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:03.300208   71627 cri.go:89] found id: "1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:04:03.300226   71627 cri.go:89] found id: ""
	I0910 19:04:03.300235   71627 logs.go:276] 1 containers: [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68]
	I0910 19:04:03.300294   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.304875   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:03.304953   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:03.346705   71627 cri.go:89] found id: "48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:04:03.346734   71627 cri.go:89] found id: ""
	I0910 19:04:03.346744   71627 logs.go:276] 1 containers: [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27]
	I0910 19:04:03.346807   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.351246   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:03.351314   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:03.391218   71627 cri.go:89] found id: "55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:04:03.391240   71627 cri.go:89] found id: ""
	I0910 19:04:03.391247   71627 logs.go:276] 1 containers: [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20]
	I0910 19:04:03.391290   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.396156   71627 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:03.396264   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:03.437436   71627 cri.go:89] found id: ""
	I0910 19:04:03.437464   71627 logs.go:276] 0 containers: []
	W0910 19:04:03.437473   71627 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:03.437479   71627 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:03.437551   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:03.476396   71627 cri.go:89] found id: "b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:04:03.476417   71627 cri.go:89] found id: "173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:04:03.476420   71627 cri.go:89] found id: ""
	I0910 19:04:03.476427   71627 logs.go:276] 2 containers: [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9]
	I0910 19:04:03.476481   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.480969   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.485821   71627 logs.go:123] Gathering logs for container status ...
	I0910 19:04:03.485843   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:03.537042   71627 logs.go:123] Gathering logs for kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] ...
	I0910 19:04:03.537079   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:04:03.599059   71627 logs.go:123] Gathering logs for coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] ...
	I0910 19:04:03.599102   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:04:03.637541   71627 logs.go:123] Gathering logs for kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] ...
	I0910 19:04:03.637576   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:04:03.682203   71627 logs.go:123] Gathering logs for kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] ...
	I0910 19:04:03.682234   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:04:03.734965   71627 logs.go:123] Gathering logs for storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] ...
	I0910 19:04:03.734992   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:04:03.769711   71627 logs.go:123] Gathering logs for storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] ...
	I0910 19:04:03.769738   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:04:03.805970   71627 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:03.805999   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:04.165756   71627 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:04.165796   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:04.254572   71627 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:04.254609   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:04.272637   71627 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:04.272686   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:04.421716   71627 logs.go:123] Gathering logs for etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] ...
	I0910 19:04:04.421756   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:04:04.476657   71627 logs.go:123] Gathering logs for kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] ...
	I0910 19:04:04.476701   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:04:07.038592   71627 system_pods.go:59] 8 kube-system pods found
	I0910 19:04:07.038618   71627 system_pods.go:61] "coredns-6f6b679f8f-nq9fl" [87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f] Running
	I0910 19:04:07.038624   71627 system_pods.go:61] "etcd-default-k8s-diff-port-557504" [484ce7e2-e5b6-4b96-928e-c4519e072425] Running
	I0910 19:04:07.038628   71627 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-557504" [55a71e48-2cca-484c-8186-176494f8158c] Running
	I0910 19:04:07.038632   71627 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-557504" [98e7866a-c4a1-4b90-a758-506502405feb] Running
	I0910 19:04:07.038636   71627 system_pods.go:61] "kube-proxy-4t8r9" [aca739fc-0169-433b-85f1-17bf3ab538cb] Running
	I0910 19:04:07.038639   71627 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-557504" [eaa24622-2f9c-47bd-b002-173d5fb06126] Running
	I0910 19:04:07.038644   71627 system_pods.go:61] "metrics-server-6867b74b74-4sfwg" [6b5d0161-6a62-4752-b714-ada6b3772956] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:07.038651   71627 system_pods.go:61] "storage-provisioner" [7536d42b-90f4-44de-a7ba-652f8e535304] Running
	I0910 19:04:07.038658   71627 system_pods.go:74] duration metric: took 3.925277367s to wait for pod list to return data ...
	I0910 19:04:07.038667   71627 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:04:07.040831   71627 default_sa.go:45] found service account: "default"
	I0910 19:04:07.040854   71627 default_sa.go:55] duration metric: took 2.180848ms for default service account to be created ...
	I0910 19:04:07.040864   71627 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:04:07.045130   71627 system_pods.go:86] 8 kube-system pods found
	I0910 19:04:07.045151   71627 system_pods.go:89] "coredns-6f6b679f8f-nq9fl" [87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f] Running
	I0910 19:04:07.045157   71627 system_pods.go:89] "etcd-default-k8s-diff-port-557504" [484ce7e2-e5b6-4b96-928e-c4519e072425] Running
	I0910 19:04:07.045162   71627 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-557504" [55a71e48-2cca-484c-8186-176494f8158c] Running
	I0910 19:04:07.045167   71627 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-557504" [98e7866a-c4a1-4b90-a758-506502405feb] Running
	I0910 19:04:07.045171   71627 system_pods.go:89] "kube-proxy-4t8r9" [aca739fc-0169-433b-85f1-17bf3ab538cb] Running
	I0910 19:04:07.045175   71627 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-557504" [eaa24622-2f9c-47bd-b002-173d5fb06126] Running
	I0910 19:04:07.045180   71627 system_pods.go:89] "metrics-server-6867b74b74-4sfwg" [6b5d0161-6a62-4752-b714-ada6b3772956] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:07.045184   71627 system_pods.go:89] "storage-provisioner" [7536d42b-90f4-44de-a7ba-652f8e535304] Running
	I0910 19:04:07.045191   71627 system_pods.go:126] duration metric: took 4.321406ms to wait for k8s-apps to be running ...
	I0910 19:04:07.045200   71627 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:04:07.045242   71627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:07.061292   71627 system_svc.go:56] duration metric: took 16.084643ms WaitForService to wait for kubelet
	I0910 19:04:07.061318   71627 kubeadm.go:582] duration metric: took 4m21.980981405s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:04:07.061342   71627 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:04:07.064260   71627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:04:07.064277   71627 node_conditions.go:123] node cpu capacity is 2
	I0910 19:04:07.064288   71627 node_conditions.go:105] duration metric: took 2.940712ms to run NodePressure ...
	I0910 19:04:07.064298   71627 start.go:241] waiting for startup goroutines ...
	I0910 19:04:07.064308   71627 start.go:246] waiting for cluster config update ...
	I0910 19:04:07.064318   71627 start.go:255] writing updated cluster config ...
	I0910 19:04:07.064627   71627 ssh_runner.go:195] Run: rm -f paused
	I0910 19:04:07.109814   71627 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:04:07.111804   71627 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-557504" cluster and "default" namespace by default
	I0910 19:04:04.165083   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:06.663618   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:04.397627   72122 out.go:235]   - Booting up control plane ...
	I0910 19:04:04.397763   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:04:04.405199   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:04:04.407281   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:04:04.408182   72122 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:04:04.411438   72122 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 19:04:08.667046   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:11.164622   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:15.461731   71529 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.433698154s)
	I0910 19:04:15.461801   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:15.483515   71529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:04:15.497133   71529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:04:15.513903   71529 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:04:15.513924   71529 kubeadm.go:157] found existing configuration files:
	
	I0910 19:04:15.513972   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:04:15.524468   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:04:15.524529   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:04:15.534726   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:04:15.544892   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:04:15.544944   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:04:15.554663   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:04:15.564884   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:04:15.564978   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:04:15.574280   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:04:15.583882   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:04:15.583932   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:04:15.593971   71529 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:04:15.639220   71529 kubeadm.go:310] W0910 19:04:15.612221    3037 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:04:15.641412   71529 kubeadm.go:310] W0910 19:04:15.614470    3037 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:04:15.749471   71529 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:04:13.164865   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:15.165232   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:17.664384   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:19.664943   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:22.166309   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:24.300945   71529 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 19:04:24.301016   71529 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:04:24.301143   71529 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:04:24.301274   71529 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:04:24.301408   71529 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 19:04:24.301517   71529 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:04:24.302988   71529 out.go:235]   - Generating certificates and keys ...
	I0910 19:04:24.303079   71529 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:04:24.303132   71529 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:04:24.303197   71529 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:04:24.303252   71529 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:04:24.303315   71529 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:04:24.303367   71529 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:04:24.303443   71529 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:04:24.303517   71529 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:04:24.303631   71529 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:04:24.303737   71529 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:04:24.303792   71529 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:04:24.303873   71529 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:04:24.303954   71529 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:04:24.304037   71529 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 19:04:24.304120   71529 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:04:24.304217   71529 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:04:24.304299   71529 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:04:24.304423   71529 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:04:24.304523   71529 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:04:24.305839   71529 out.go:235]   - Booting up control plane ...
	I0910 19:04:24.305946   71529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:04:24.306046   71529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:04:24.306123   71529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:04:24.306254   71529 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:04:24.306338   71529 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:04:24.306387   71529 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:04:24.306507   71529 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 19:04:24.306608   71529 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 19:04:24.306679   71529 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.526264ms
	I0910 19:04:24.306748   71529 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 19:04:24.306801   71529 kubeadm.go:310] [api-check] The API server is healthy after 5.501960865s
	I0910 19:04:24.306887   71529 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 19:04:24.306997   71529 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 19:04:24.307045   71529 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 19:04:24.307202   71529 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-347802 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 19:04:24.307250   71529 kubeadm.go:310] [bootstrap-token] Using token: 3uw8fx.h3bliquui6tuj5mh
	I0910 19:04:24.308589   71529 out.go:235]   - Configuring RBAC rules ...
	I0910 19:04:24.308728   71529 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 19:04:24.308847   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 19:04:24.309021   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 19:04:24.309197   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 19:04:24.309330   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 19:04:24.309437   71529 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 19:04:24.309612   71529 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 19:04:24.309681   71529 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 19:04:24.309776   71529 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 19:04:24.309787   71529 kubeadm.go:310] 
	I0910 19:04:24.309865   71529 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 19:04:24.309874   71529 kubeadm.go:310] 
	I0910 19:04:24.309951   71529 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 19:04:24.309963   71529 kubeadm.go:310] 
	I0910 19:04:24.309984   71529 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 19:04:24.310033   71529 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 19:04:24.310085   71529 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 19:04:24.310091   71529 kubeadm.go:310] 
	I0910 19:04:24.310152   71529 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 19:04:24.310164   71529 kubeadm.go:310] 
	I0910 19:04:24.310203   71529 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 19:04:24.310214   71529 kubeadm.go:310] 
	I0910 19:04:24.310262   71529 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 19:04:24.310326   71529 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 19:04:24.310383   71529 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 19:04:24.310390   71529 kubeadm.go:310] 
	I0910 19:04:24.310457   71529 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 19:04:24.310525   71529 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 19:04:24.310531   71529 kubeadm.go:310] 
	I0910 19:04:24.310598   71529 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3uw8fx.h3bliquui6tuj5mh \
	I0910 19:04:24.310705   71529 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 \
	I0910 19:04:24.310728   71529 kubeadm.go:310] 	--control-plane 
	I0910 19:04:24.310731   71529 kubeadm.go:310] 
	I0910 19:04:24.310806   71529 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 19:04:24.310814   71529 kubeadm.go:310] 
	I0910 19:04:24.310884   71529 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3uw8fx.h3bliquui6tuj5mh \
	I0910 19:04:24.310978   71529 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 
	I0910 19:04:24.310994   71529 cni.go:84] Creating CNI manager for ""
	I0910 19:04:24.311006   71529 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:04:24.312411   71529 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 19:04:24.313516   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 19:04:24.326066   71529 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 19:04:24.346367   71529 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 19:04:24.346446   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:24.346475   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-347802 minikube.k8s.io/updated_at=2024_09_10T19_04_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=no-preload-347802 minikube.k8s.io/primary=true
	I0910 19:04:24.374396   71529 ops.go:34] apiserver oom_adj: -16
	I0910 19:04:24.561164   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:25.061938   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:25.561435   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:26.062175   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:26.561899   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:27.061256   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:24.664345   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:26.666316   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:27.561862   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:28.061889   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:28.562200   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:28.732352   71529 kubeadm.go:1113] duration metric: took 4.385961888s to wait for elevateKubeSystemPrivileges
	I0910 19:04:28.732387   71529 kubeadm.go:394] duration metric: took 5m2.035769941s to StartCluster
	I0910 19:04:28.732410   71529 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:04:28.732497   71529 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 19:04:28.735625   71529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:04:28.735909   71529 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 19:04:28.736234   71529 config.go:182] Loaded profile config "no-preload-347802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:04:28.736296   71529 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 19:04:28.736417   71529 addons.go:69] Setting storage-provisioner=true in profile "no-preload-347802"
	I0910 19:04:28.736445   71529 addons.go:234] Setting addon storage-provisioner=true in "no-preload-347802"
	W0910 19:04:28.736453   71529 addons.go:243] addon storage-provisioner should already be in state true
	I0910 19:04:28.736480   71529 host.go:66] Checking if "no-preload-347802" exists ...
	I0910 19:04:28.736667   71529 addons.go:69] Setting default-storageclass=true in profile "no-preload-347802"
	I0910 19:04:28.736674   71529 addons.go:69] Setting metrics-server=true in profile "no-preload-347802"
	I0910 19:04:28.736703   71529 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-347802"
	I0910 19:04:28.736717   71529 addons.go:234] Setting addon metrics-server=true in "no-preload-347802"
	W0910 19:04:28.736727   71529 addons.go:243] addon metrics-server should already be in state true
	I0910 19:04:28.736758   71529 host.go:66] Checking if "no-preload-347802" exists ...
	I0910 19:04:28.737346   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.737360   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.737401   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.737709   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.737809   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.737832   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.737891   71529 out.go:177] * Verifying Kubernetes components...
	I0910 19:04:28.739122   71529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:04:28.755720   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41059
	I0910 19:04:28.755754   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0910 19:04:28.756110   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46397
	I0910 19:04:28.756297   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.756298   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.756688   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.756870   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.756891   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.757053   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.757092   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.757402   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.757426   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.757451   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.757637   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.757759   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.758328   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.758368   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.760809   71529 addons.go:234] Setting addon default-storageclass=true in "no-preload-347802"
	W0910 19:04:28.760825   71529 addons.go:243] addon default-storageclass should already be in state true
	I0910 19:04:28.760848   71529 host.go:66] Checking if "no-preload-347802" exists ...
	I0910 19:04:28.761254   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.761285   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.761486   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.761994   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.762024   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.775766   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41103
	I0910 19:04:28.776199   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.776801   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.776824   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.777167   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.777359   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.777651   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0910 19:04:28.778091   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.778678   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.778696   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.779019   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.779215   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.779616   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 19:04:28.780231   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0910 19:04:28.780605   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.780675   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 19:04:28.781156   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.781183   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.781330   71529 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:04:28.781416   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.781810   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.781841   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.782326   71529 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 19:04:28.782391   71529 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:04:28.782408   71529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 19:04:28.782425   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 19:04:28.783393   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 19:04:28.783413   71529 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 19:04:28.783433   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 19:04:28.785287   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.785763   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 19:04:28.785792   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.785948   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 19:04:28.786114   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 19:04:28.786250   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 19:04:28.786397   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 19:04:28.786768   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.787101   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 19:04:28.787124   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.787330   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 19:04:28.787492   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 19:04:28.787637   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 19:04:28.787747   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 19:04:28.802599   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0910 19:04:28.802947   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.803402   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.803415   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.803711   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.803882   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.805296   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 19:04:28.805498   71529 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 19:04:28.805510   71529 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 19:04:28.805523   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 19:04:28.808615   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.809041   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 19:04:28.809056   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.809333   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 19:04:28.809518   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 19:04:28.809687   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 19:04:28.809792   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 19:04:28.974399   71529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:04:29.068531   71529 node_ready.go:35] waiting up to 6m0s for node "no-preload-347802" to be "Ready" ...
	I0910 19:04:29.084281   71529 node_ready.go:49] node "no-preload-347802" has status "Ready":"True"
	I0910 19:04:29.084306   71529 node_ready.go:38] duration metric: took 15.737646ms for node "no-preload-347802" to be "Ready" ...
	I0910 19:04:29.084317   71529 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:04:29.098794   71529 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:29.122272   71529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:04:29.132813   71529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 19:04:29.191758   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 19:04:29.191777   71529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 19:04:29.224998   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 19:04:29.225019   71529 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 19:04:29.264455   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:04:29.264489   71529 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 19:04:29.369504   71529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:04:30.199702   71529 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.066859027s)
	I0910 19:04:30.199757   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.199769   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.199850   71529 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.077541595s)
	I0910 19:04:30.199895   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.199909   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.200096   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.200135   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200147   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.200155   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.200154   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.200174   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.200187   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200201   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.200209   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.200220   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.200387   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200402   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.200617   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.200655   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200680   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.219416   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.219437   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.219697   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.219705   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.219741   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.366927   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.366957   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.367264   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.367279   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.367288   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.367302   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.367506   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.367520   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.367533   71529 addons.go:475] Verifying addon metrics-server=true in "no-preload-347802"
	I0910 19:04:30.369968   71529 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0910 19:04:30.371186   71529 addons.go:510] duration metric: took 1.634894777s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0910 19:04:31.104506   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:29.164993   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:31.668683   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:33.105761   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:35.606200   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:34.164783   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:36.663840   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:38.106188   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:39.106175   71529 pod_ready.go:93] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.106199   71529 pod_ready.go:82] duration metric: took 10.007378894s for pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.106210   71529 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hlbrz" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.111333   71529 pod_ready.go:93] pod "coredns-6f6b679f8f-hlbrz" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.111352   71529 pod_ready.go:82] duration metric: took 5.13344ms for pod "coredns-6f6b679f8f-hlbrz" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.111362   71529 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.116673   71529 pod_ready.go:93] pod "etcd-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.116689   71529 pod_ready.go:82] duration metric: took 5.319986ms for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.116697   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.125400   71529 pod_ready.go:93] pod "kube-apiserver-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.125422   71529 pod_ready.go:82] duration metric: took 8.717835ms for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.125433   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.133790   71529 pod_ready.go:93] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.133807   71529 pod_ready.go:82] duration metric: took 8.36626ms for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.133818   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gwzhs" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.504642   71529 pod_ready.go:93] pod "kube-proxy-gwzhs" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.504665   71529 pod_ready.go:82] duration metric: took 370.840119ms for pod "kube-proxy-gwzhs" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.504675   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.903625   71529 pod_ready.go:93] pod "kube-scheduler-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.903646   71529 pod_ready.go:82] duration metric: took 398.964651ms for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.903653   71529 pod_ready.go:39] duration metric: took 10.819325885s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:04:39.903666   71529 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:04:39.903710   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:39.918479   71529 api_server.go:72] duration metric: took 11.182520681s to wait for apiserver process to appear ...
	I0910 19:04:39.918501   71529 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:04:39.918521   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 19:04:39.922745   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0910 19:04:39.923681   71529 api_server.go:141] control plane version: v1.31.0
	I0910 19:04:39.923701   71529 api_server.go:131] duration metric: took 5.193102ms to wait for apiserver health ...
	I0910 19:04:39.923710   71529 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:04:40.106587   71529 system_pods.go:59] 9 kube-system pods found
	I0910 19:04:40.106614   71529 system_pods.go:61] "coredns-6f6b679f8f-bsp9f" [53cd67b5-b542-4b40-adf9-3aba78407735] Running
	I0910 19:04:40.106619   71529 system_pods.go:61] "coredns-6f6b679f8f-hlbrz" [1e66ea46-d3ad-44e4-b9fc-c7ea5c44ac15] Running
	I0910 19:04:40.106623   71529 system_pods.go:61] "etcd-no-preload-347802" [8fcf8fce-881a-44d8-8763-2a02c848f39e] Running
	I0910 19:04:40.106626   71529 system_pods.go:61] "kube-apiserver-no-preload-347802" [372d6d5b-bf44-4edf-bd25-7b75f5966d9e] Running
	I0910 19:04:40.106630   71529 system_pods.go:61] "kube-controller-manager-no-preload-347802" [6ab95db4-18a0-48b6-ae4c-b08ccdeaec01] Running
	I0910 19:04:40.106633   71529 system_pods.go:61] "kube-proxy-gwzhs" [f03fe8e3-bee7-4805-a1e9-83494f33105c] Running
	I0910 19:04:40.106637   71529 system_pods.go:61] "kube-scheduler-no-preload-347802" [5a130f4f-7577-4e25-ac66-b9181733e667] Running
	I0910 19:04:40.106642   71529 system_pods.go:61] "metrics-server-6867b74b74-cz4tz" [22d16ca9-922b-40d8-97d1-47a44ba70aa3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:40.106646   71529 system_pods.go:61] "storage-provisioner" [cd77229d-0209-459f-ac5e-96317c425f60] Running
	I0910 19:04:40.106652   71529 system_pods.go:74] duration metric: took 182.93737ms to wait for pod list to return data ...
	I0910 19:04:40.106662   71529 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:04:40.303294   71529 default_sa.go:45] found service account: "default"
	I0910 19:04:40.303316   71529 default_sa.go:55] duration metric: took 196.649242ms for default service account to be created ...
	I0910 19:04:40.303324   71529 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:04:40.506862   71529 system_pods.go:86] 9 kube-system pods found
	I0910 19:04:40.506894   71529 system_pods.go:89] "coredns-6f6b679f8f-bsp9f" [53cd67b5-b542-4b40-adf9-3aba78407735] Running
	I0910 19:04:40.506902   71529 system_pods.go:89] "coredns-6f6b679f8f-hlbrz" [1e66ea46-d3ad-44e4-b9fc-c7ea5c44ac15] Running
	I0910 19:04:40.506908   71529 system_pods.go:89] "etcd-no-preload-347802" [8fcf8fce-881a-44d8-8763-2a02c848f39e] Running
	I0910 19:04:40.506913   71529 system_pods.go:89] "kube-apiserver-no-preload-347802" [372d6d5b-bf44-4edf-bd25-7b75f5966d9e] Running
	I0910 19:04:40.506919   71529 system_pods.go:89] "kube-controller-manager-no-preload-347802" [6ab95db4-18a0-48b6-ae4c-b08ccdeaec01] Running
	I0910 19:04:40.506925   71529 system_pods.go:89] "kube-proxy-gwzhs" [f03fe8e3-bee7-4805-a1e9-83494f33105c] Running
	I0910 19:04:40.506931   71529 system_pods.go:89] "kube-scheduler-no-preload-347802" [5a130f4f-7577-4e25-ac66-b9181733e667] Running
	I0910 19:04:40.506940   71529 system_pods.go:89] "metrics-server-6867b74b74-cz4tz" [22d16ca9-922b-40d8-97d1-47a44ba70aa3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:40.506949   71529 system_pods.go:89] "storage-provisioner" [cd77229d-0209-459f-ac5e-96317c425f60] Running
	I0910 19:04:40.506963   71529 system_pods.go:126] duration metric: took 203.633111ms to wait for k8s-apps to be running ...
	I0910 19:04:40.506974   71529 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:04:40.507032   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:40.522711   71529 system_svc.go:56] duration metric: took 15.728044ms WaitForService to wait for kubelet
	I0910 19:04:40.522739   71529 kubeadm.go:582] duration metric: took 11.786784927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:04:40.522761   71529 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:04:40.702993   71529 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:04:40.703011   71529 node_conditions.go:123] node cpu capacity is 2
	I0910 19:04:40.703020   71529 node_conditions.go:105] duration metric: took 180.253729ms to run NodePressure ...
	I0910 19:04:40.703031   71529 start.go:241] waiting for startup goroutines ...
	I0910 19:04:40.703037   71529 start.go:246] waiting for cluster config update ...
	I0910 19:04:40.703046   71529 start.go:255] writing updated cluster config ...
	I0910 19:04:40.703329   71529 ssh_runner.go:195] Run: rm -f paused
	I0910 19:04:40.750434   71529 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:04:40.752453   71529 out.go:177] * Done! kubectl is now configured to use "no-preload-347802" cluster and "default" namespace by default
	I0910 19:04:37.670616   71183 pod_ready.go:82] duration metric: took 4m0.012645309s for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	E0910 19:04:37.670637   71183 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0910 19:04:37.670644   71183 pod_ready.go:39] duration metric: took 4m3.614436373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:04:37.670658   71183 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:04:37.670693   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:37.670746   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:37.721269   71183 cri.go:89] found id: "b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:37.721295   71183 cri.go:89] found id: ""
	I0910 19:04:37.721303   71183 logs.go:276] 1 containers: [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293]
	I0910 19:04:37.721361   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.725648   71183 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:37.725711   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:37.760937   71183 cri.go:89] found id: "4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:37.760967   71183 cri.go:89] found id: ""
	I0910 19:04:37.760978   71183 logs.go:276] 1 containers: [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34]
	I0910 19:04:37.761034   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.765181   71183 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:37.765243   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:37.800419   71183 cri.go:89] found id: "6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:37.800447   71183 cri.go:89] found id: ""
	I0910 19:04:37.800457   71183 logs.go:276] 1 containers: [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313]
	I0910 19:04:37.800509   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.805255   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:37.805330   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:37.849032   71183 cri.go:89] found id: "6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:37.849055   71183 cri.go:89] found id: ""
	I0910 19:04:37.849064   71183 logs.go:276] 1 containers: [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc]
	I0910 19:04:37.849136   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.853148   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:37.853224   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:37.888327   71183 cri.go:89] found id: "f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:37.888352   71183 cri.go:89] found id: ""
	I0910 19:04:37.888361   71183 logs.go:276] 1 containers: [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e]
	I0910 19:04:37.888417   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.892721   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:37.892782   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:37.928648   71183 cri.go:89] found id: "2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:37.928671   71183 cri.go:89] found id: ""
	I0910 19:04:37.928679   71183 logs.go:276] 1 containers: [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3]
	I0910 19:04:37.928731   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.932746   71183 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:37.932804   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:37.967343   71183 cri.go:89] found id: ""
	I0910 19:04:37.967372   71183 logs.go:276] 0 containers: []
	W0910 19:04:37.967382   71183 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:37.967387   71183 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:37.967435   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:38.004150   71183 cri.go:89] found id: "11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:38.004173   71183 cri.go:89] found id: "2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:38.004176   71183 cri.go:89] found id: ""
	I0910 19:04:38.004183   71183 logs.go:276] 2 containers: [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f]
	I0910 19:04:38.004227   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:38.008118   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:38.011779   71183 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:38.011799   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:38.026386   71183 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:38.026405   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:38.149296   71183 logs.go:123] Gathering logs for kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] ...
	I0910 19:04:38.149324   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:38.200987   71183 logs.go:123] Gathering logs for etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] ...
	I0910 19:04:38.201019   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:38.243953   71183 logs.go:123] Gathering logs for coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] ...
	I0910 19:04:38.243983   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:38.287242   71183 logs.go:123] Gathering logs for kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] ...
	I0910 19:04:38.287272   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:38.329165   71183 logs.go:123] Gathering logs for kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] ...
	I0910 19:04:38.329193   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:38.391117   71183 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:38.391144   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:38.464906   71183 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:38.464944   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:38.979681   71183 logs.go:123] Gathering logs for storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] ...
	I0910 19:04:38.979732   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:39.015604   71183 logs.go:123] Gathering logs for storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] ...
	I0910 19:04:39.015636   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:39.055715   71183 logs.go:123] Gathering logs for container status ...
	I0910 19:04:39.055748   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:39.103920   71183 logs.go:123] Gathering logs for kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] ...
	I0910 19:04:39.103952   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:41.650354   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:41.667568   71183 api_server.go:72] duration metric: took 4m15.330735169s to wait for apiserver process to appear ...
	I0910 19:04:41.667604   71183 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:04:41.667636   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:41.667682   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:41.707476   71183 cri.go:89] found id: "b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:41.707507   71183 cri.go:89] found id: ""
	I0910 19:04:41.707520   71183 logs.go:276] 1 containers: [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293]
	I0910 19:04:41.707590   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.711732   71183 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:41.711794   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:41.745943   71183 cri.go:89] found id: "4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:41.745963   71183 cri.go:89] found id: ""
	I0910 19:04:41.745972   71183 logs.go:276] 1 containers: [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34]
	I0910 19:04:41.746023   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.749930   71183 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:41.749978   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:41.790296   71183 cri.go:89] found id: "6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:41.790318   71183 cri.go:89] found id: ""
	I0910 19:04:41.790327   71183 logs.go:276] 1 containers: [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313]
	I0910 19:04:41.790388   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.794933   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:41.794988   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:41.840669   71183 cri.go:89] found id: "6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:41.840695   71183 cri.go:89] found id: ""
	I0910 19:04:41.840704   71183 logs.go:276] 1 containers: [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc]
	I0910 19:04:41.840762   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.845674   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:41.845729   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:41.891686   71183 cri.go:89] found id: "f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:41.891708   71183 cri.go:89] found id: ""
	I0910 19:04:41.891717   71183 logs.go:276] 1 containers: [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e]
	I0910 19:04:41.891774   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.896435   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:41.896486   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:41.935802   71183 cri.go:89] found id: "2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:41.935829   71183 cri.go:89] found id: ""
	I0910 19:04:41.935838   71183 logs.go:276] 1 containers: [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3]
	I0910 19:04:41.935882   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.940924   71183 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:41.940979   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:41.980326   71183 cri.go:89] found id: ""
	I0910 19:04:41.980349   71183 logs.go:276] 0 containers: []
	W0910 19:04:41.980357   71183 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:41.980362   71183 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:41.980409   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:42.021683   71183 cri.go:89] found id: "11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:42.021701   71183 cri.go:89] found id: "2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:42.021704   71183 cri.go:89] found id: ""
	I0910 19:04:42.021711   71183 logs.go:276] 2 containers: [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f]
	I0910 19:04:42.021760   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:42.025986   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:42.029896   71183 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:42.029919   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:42.101147   71183 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:42.101182   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:42.115299   71183 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:42.115324   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:42.230472   71183 logs.go:123] Gathering logs for kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] ...
	I0910 19:04:42.230503   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:42.285314   71183 logs.go:123] Gathering logs for etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] ...
	I0910 19:04:42.285341   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:42.338243   71183 logs.go:123] Gathering logs for coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] ...
	I0910 19:04:42.338283   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:42.380609   71183 logs.go:123] Gathering logs for kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] ...
	I0910 19:04:42.380636   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:42.424255   71183 logs.go:123] Gathering logs for kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] ...
	I0910 19:04:42.424290   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:42.481943   71183 logs.go:123] Gathering logs for kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] ...
	I0910 19:04:42.481972   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:42.525590   71183 logs.go:123] Gathering logs for storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] ...
	I0910 19:04:42.525613   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:42.566519   71183 logs.go:123] Gathering logs for storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] ...
	I0910 19:04:42.566546   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:42.601221   71183 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:42.601256   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:43.021780   71183 logs.go:123] Gathering logs for container status ...
	I0910 19:04:43.021816   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:45.569149   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:04:45.575146   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I0910 19:04:45.576058   71183 api_server.go:141] control plane version: v1.31.0
	I0910 19:04:45.576077   71183 api_server.go:131] duration metric: took 3.908465286s to wait for apiserver health ...
	I0910 19:04:45.576088   71183 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:04:45.576112   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:45.576159   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:45.631224   71183 cri.go:89] found id: "b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:45.631246   71183 cri.go:89] found id: ""
	I0910 19:04:45.631254   71183 logs.go:276] 1 containers: [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293]
	I0910 19:04:45.631310   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.636343   71183 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:45.636408   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:45.675538   71183 cri.go:89] found id: "4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:45.675558   71183 cri.go:89] found id: ""
	I0910 19:04:45.675565   71183 logs.go:276] 1 containers: [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34]
	I0910 19:04:45.675620   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.679865   71183 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:45.679921   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:45.724808   71183 cri.go:89] found id: "6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:45.724835   71183 cri.go:89] found id: ""
	I0910 19:04:45.724844   71183 logs.go:276] 1 containers: [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313]
	I0910 19:04:45.724898   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.729083   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:45.729141   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:45.762943   71183 cri.go:89] found id: "6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:45.762965   71183 cri.go:89] found id: ""
	I0910 19:04:45.762973   71183 logs.go:276] 1 containers: [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc]
	I0910 19:04:45.763022   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.766889   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:45.766935   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:45.802849   71183 cri.go:89] found id: "f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:45.802875   71183 cri.go:89] found id: ""
	I0910 19:04:45.802883   71183 logs.go:276] 1 containers: [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e]
	I0910 19:04:45.802924   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.806796   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:45.806860   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:45.841656   71183 cri.go:89] found id: "2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:45.841675   71183 cri.go:89] found id: ""
	I0910 19:04:45.841682   71183 logs.go:276] 1 containers: [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3]
	I0910 19:04:45.841722   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.846078   71183 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:45.846145   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:45.883750   71183 cri.go:89] found id: ""
	I0910 19:04:45.883773   71183 logs.go:276] 0 containers: []
	W0910 19:04:45.883787   71183 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:45.883795   71183 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:45.883857   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:45.918786   71183 cri.go:89] found id: "11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:45.918815   71183 cri.go:89] found id: "2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:45.918822   71183 cri.go:89] found id: ""
	I0910 19:04:45.918829   71183 logs.go:276] 2 containers: [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f]
	I0910 19:04:45.918876   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.923329   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.927395   71183 logs.go:123] Gathering logs for storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] ...
	I0910 19:04:45.927417   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:45.963527   71183 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:45.963557   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:46.364843   71183 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:46.364886   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:46.379339   71183 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:46.379366   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:46.483159   71183 logs.go:123] Gathering logs for kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] ...
	I0910 19:04:46.483190   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:46.523850   71183 logs.go:123] Gathering logs for kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] ...
	I0910 19:04:46.523877   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:46.574864   71183 logs.go:123] Gathering logs for kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] ...
	I0910 19:04:46.574905   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:46.613765   71183 logs.go:123] Gathering logs for storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] ...
	I0910 19:04:46.613793   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:46.659791   71183 logs.go:123] Gathering logs for container status ...
	I0910 19:04:46.659819   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:46.722103   71183 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:46.722138   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:46.794098   71183 logs.go:123] Gathering logs for kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] ...
	I0910 19:04:46.794140   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:46.850112   71183 logs.go:123] Gathering logs for etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] ...
	I0910 19:04:46.850148   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:46.899733   71183 logs.go:123] Gathering logs for coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] ...
	I0910 19:04:46.899770   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:44.413134   72122 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0910 19:04:44.413215   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:44.413400   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:04:49.448164   71183 system_pods.go:59] 8 kube-system pods found
	I0910 19:04:49.448194   71183 system_pods.go:61] "coredns-6f6b679f8f-mt78p" [b4bbfe99-3c36-4095-b7e8-ee0861f9973f] Running
	I0910 19:04:49.448201   71183 system_pods.go:61] "etcd-embed-certs-836868" [0515a8db-e1b9-41d9-a69e-e49fbd6af70f] Running
	I0910 19:04:49.448207   71183 system_pods.go:61] "kube-apiserver-embed-certs-836868" [f6771940-518b-4ae6-93a6-8ffd2e08a6ff] Running
	I0910 19:04:49.448216   71183 system_pods.go:61] "kube-controller-manager-embed-certs-836868" [07f6b11e-478b-4501-b615-8906877c7cbe] Running
	I0910 19:04:49.448220   71183 system_pods.go:61] "kube-proxy-4fddv" [13f0b1df-26eb-4a6c-957d-0b7655309cb9] Running
	I0910 19:04:49.448225   71183 system_pods.go:61] "kube-scheduler-embed-certs-836868" [a16bf2b1-3fe3-487b-92f7-f71f693b83dd] Running
	I0910 19:04:49.448232   71183 system_pods.go:61] "metrics-server-6867b74b74-26knw" [fdf89bfa-f2b6-4dc4-9279-ed75c1256494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:49.448239   71183 system_pods.go:61] "storage-provisioner" [47ed78c5-1cce-4d50-a023-5c356f331035] Running
	I0910 19:04:49.448248   71183 system_pods.go:74] duration metric: took 3.872154051s to wait for pod list to return data ...
	I0910 19:04:49.448255   71183 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:04:49.450795   71183 default_sa.go:45] found service account: "default"
	I0910 19:04:49.450816   71183 default_sa.go:55] duration metric: took 2.553358ms for default service account to be created ...
	I0910 19:04:49.450826   71183 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:04:49.454993   71183 system_pods.go:86] 8 kube-system pods found
	I0910 19:04:49.455015   71183 system_pods.go:89] "coredns-6f6b679f8f-mt78p" [b4bbfe99-3c36-4095-b7e8-ee0861f9973f] Running
	I0910 19:04:49.455020   71183 system_pods.go:89] "etcd-embed-certs-836868" [0515a8db-e1b9-41d9-a69e-e49fbd6af70f] Running
	I0910 19:04:49.455024   71183 system_pods.go:89] "kube-apiserver-embed-certs-836868" [f6771940-518b-4ae6-93a6-8ffd2e08a6ff] Running
	I0910 19:04:49.455030   71183 system_pods.go:89] "kube-controller-manager-embed-certs-836868" [07f6b11e-478b-4501-b615-8906877c7cbe] Running
	I0910 19:04:49.455033   71183 system_pods.go:89] "kube-proxy-4fddv" [13f0b1df-26eb-4a6c-957d-0b7655309cb9] Running
	I0910 19:04:49.455038   71183 system_pods.go:89] "kube-scheduler-embed-certs-836868" [a16bf2b1-3fe3-487b-92f7-f71f693b83dd] Running
	I0910 19:04:49.455047   71183 system_pods.go:89] "metrics-server-6867b74b74-26knw" [fdf89bfa-f2b6-4dc4-9279-ed75c1256494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:49.455053   71183 system_pods.go:89] "storage-provisioner" [47ed78c5-1cce-4d50-a023-5c356f331035] Running
	I0910 19:04:49.455062   71183 system_pods.go:126] duration metric: took 4.230457ms to wait for k8s-apps to be running ...
	I0910 19:04:49.455073   71183 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:04:49.455130   71183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:49.471265   71183 system_svc.go:56] duration metric: took 16.184718ms WaitForService to wait for kubelet
	I0910 19:04:49.471293   71183 kubeadm.go:582] duration metric: took 4m23.134472506s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:04:49.471320   71183 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:04:49.475529   71183 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:04:49.475548   71183 node_conditions.go:123] node cpu capacity is 2
	I0910 19:04:49.475558   71183 node_conditions.go:105] duration metric: took 4.228611ms to run NodePressure ...
	I0910 19:04:49.475567   71183 start.go:241] waiting for startup goroutines ...
	I0910 19:04:49.475577   71183 start.go:246] waiting for cluster config update ...
	I0910 19:04:49.475589   71183 start.go:255] writing updated cluster config ...
	I0910 19:04:49.475827   71183 ssh_runner.go:195] Run: rm -f paused
	I0910 19:04:49.522354   71183 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:04:49.524738   71183 out.go:177] * Done! kubectl is now configured to use "embed-certs-836868" cluster and "default" namespace by default
	I0910 19:04:49.413796   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:49.413967   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:04:59.414341   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:59.414514   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:19.415680   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:05:19.415950   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:59.417770   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:05:59.418015   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:59.418035   72122 kubeadm.go:310] 
	I0910 19:05:59.418101   72122 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0910 19:05:59.418137   72122 kubeadm.go:310] 		timed out waiting for the condition
	I0910 19:05:59.418143   72122 kubeadm.go:310] 
	I0910 19:05:59.418178   72122 kubeadm.go:310] 	This error is likely caused by:
	I0910 19:05:59.418207   72122 kubeadm.go:310] 		- The kubelet is not running
	I0910 19:05:59.418313   72122 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0910 19:05:59.418321   72122 kubeadm.go:310] 
	I0910 19:05:59.418443   72122 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0910 19:05:59.418477   72122 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0910 19:05:59.418519   72122 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0910 19:05:59.418527   72122 kubeadm.go:310] 
	I0910 19:05:59.418625   72122 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0910 19:05:59.418723   72122 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0910 19:05:59.418731   72122 kubeadm.go:310] 
	I0910 19:05:59.418869   72122 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0910 19:05:59.418976   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0910 19:05:59.419045   72122 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0910 19:05:59.419141   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0910 19:05:59.419152   72122 kubeadm.go:310] 
	I0910 19:05:59.420015   72122 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:05:59.420093   72122 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0910 19:05:59.420165   72122 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0910 19:05:59.420289   72122 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
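Editor's note: the commands below are only the kubelet-check guidance printed by kubeadm above, collected into one runnable sequence for convenience; they would be run on the affected minikube node (for example via 'minikube ssh'). CONTAINERID is a placeholder to be taken from the 'crictl ps -a' output, not a value present in this log.

	# inspect the kubelet service that kubeadm reports as not running/healthy
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list any control-plane containers CRI-O may have started, then read the failing one's logs
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID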
	
	I0910 19:05:59.420339   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:06:04.848652   72122 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.428289133s)
	I0910 19:06:04.848719   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:06:04.862914   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:06:04.872903   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:06:04.872920   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 19:06:04.872960   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:06:04.882109   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:06:04.882168   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:06:04.890962   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:06:04.899925   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:06:04.899985   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:06:04.908796   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:06:04.917123   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:06:04.917173   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:06:04.925821   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:06:04.937885   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:06:04.937963   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:06:04.948108   72122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:06:05.019246   72122 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0910 19:06:05.019321   72122 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:06:05.162639   72122 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:06:05.162770   72122 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:06:05.162918   72122 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 19:06:05.343270   72122 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:06:05.345092   72122 out.go:235]   - Generating certificates and keys ...
	I0910 19:06:05.345189   72122 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:06:05.345299   72122 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:06:05.345417   72122 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:06:05.345497   72122 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:06:05.345606   72122 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:06:05.345718   72122 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:06:05.345981   72122 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:06:05.346367   72122 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:06:05.346822   72122 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:06:05.347133   72122 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:06:05.347246   72122 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:06:05.347346   72122 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:06:05.536681   72122 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:06:05.773929   72122 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:06:05.994857   72122 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:06:06.139145   72122 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:06:06.154510   72122 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:06:06.155479   72122 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:06:06.155548   72122 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:06:06.311520   72122 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:06:06.314167   72122 out.go:235]   - Booting up control plane ...
	I0910 19:06:06.314311   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:06:06.320856   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:06:06.321801   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:06:06.322508   72122 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:06:06.324744   72122 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 19:06:46.327168   72122 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0910 19:06:46.327286   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:06:46.327534   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:06:51.328423   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:06:51.328643   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:07:01.329028   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:07:01.329315   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:07:21.329371   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:07:21.329627   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:08:01.328238   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:08:01.328535   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:08:01.328566   72122 kubeadm.go:310] 
	I0910 19:08:01.328625   72122 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0910 19:08:01.328688   72122 kubeadm.go:310] 		timed out waiting for the condition
	I0910 19:08:01.328701   72122 kubeadm.go:310] 
	I0910 19:08:01.328749   72122 kubeadm.go:310] 	This error is likely caused by:
	I0910 19:08:01.328797   72122 kubeadm.go:310] 		- The kubelet is not running
	I0910 19:08:01.328941   72122 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0910 19:08:01.328953   72122 kubeadm.go:310] 
	I0910 19:08:01.329068   72122 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0910 19:08:01.329136   72122 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0910 19:08:01.329177   72122 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0910 19:08:01.329191   72122 kubeadm.go:310] 
	I0910 19:08:01.329310   72122 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0910 19:08:01.329377   72122 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0910 19:08:01.329383   72122 kubeadm.go:310] 
	I0910 19:08:01.329468   72122 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0910 19:08:01.329539   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0910 19:08:01.329607   72122 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0910 19:08:01.329667   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0910 19:08:01.329674   72122 kubeadm.go:310] 
	I0910 19:08:01.330783   72122 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:08:01.330892   72122 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0910 19:08:01.330963   72122 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0910 19:08:01.331020   72122 kubeadm.go:394] duration metric: took 8m1.874926868s to StartCluster
	I0910 19:08:01.331061   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:08:01.331117   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:08:01.385468   72122 cri.go:89] found id: ""
	I0910 19:08:01.385492   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.385499   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:08:01.385505   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:08:01.385571   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:08:01.424028   72122 cri.go:89] found id: ""
	I0910 19:08:01.424051   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.424060   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:08:01.424064   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:08:01.424121   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:08:01.462946   72122 cri.go:89] found id: ""
	I0910 19:08:01.462973   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.462983   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:08:01.462991   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:08:01.463045   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:08:01.498242   72122 cri.go:89] found id: ""
	I0910 19:08:01.498269   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.498278   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:08:01.498283   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:08:01.498329   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:08:01.532917   72122 cri.go:89] found id: ""
	I0910 19:08:01.532946   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.532953   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:08:01.532959   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:08:01.533011   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:08:01.567935   72122 cri.go:89] found id: ""
	I0910 19:08:01.567959   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.567967   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:08:01.567973   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:08:01.568027   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:08:01.601393   72122 cri.go:89] found id: ""
	I0910 19:08:01.601418   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.601426   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:08:01.601432   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:08:01.601489   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:08:01.639307   72122 cri.go:89] found id: ""
	I0910 19:08:01.639335   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.639345   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:08:01.639358   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:08:01.639373   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:08:01.726566   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:08:01.726591   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:08:01.726614   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:08:01.839965   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:08:01.840004   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:08:01.879658   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:08:01.879687   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:08:01.939066   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:08:01.939102   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
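Editor's note: the "Run:" lines above are the exact commands minikube uses to gather diagnostics once the control plane fails to come up. A minimal sketch for collecting the same information by hand on the node follows; the commands are copied from the log, only the file redirections are added here for illustration.

	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig > describe-nodes.txt 2>&1
	sudo journalctl -u crio -n 400 > crio.log
	sudo $(which crictl || echo crictl) ps -a > containers.txt || sudo docker ps -a > containers.txt
	sudo journalctl -u kubelet -n 400 > kubelet.log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log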
	W0910 19:08:01.955390   72122 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0910 19:08:01.955436   72122 out.go:270] * 
	W0910 19:08:01.955500   72122 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0910 19:08:01.955524   72122 out.go:270] * 
	W0910 19:08:01.956343   72122 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 19:08:01.959608   72122 out.go:201] 
	W0910 19:08:01.960877   72122 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0910 19:08:01.960929   72122 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0910 19:08:01.960957   72122 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0910 19:08:01.962345   72122 out.go:201] 
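Editor's note: a minimal sketch of how the Suggestion above might be applied. The profile name and the rest of the original 'minikube start' invocation are not visible in this excerpt, so <profile> is a placeholder; the extra-config flag is taken verbatim from the Suggestion line, and enabling the kubelet unit addresses the [WARNING Service-Kubelet] preflight message seen above.

	# enable the kubelet service flagged by the preflight warning
	minikube ssh -p <profile> -- sudo systemctl enable kubelet.service
	# retry the start with the cgroup-driver hint from the suggestion
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd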
	
	
	==> CRI-O <==
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.749559449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995622749540565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62fd97cd-bdff-45f7-a6ec-cef672e0439d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.750334225Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4979fbed-c2c8-418b-88ca-0c989e81e86b name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.750385810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4979fbed-c2c8-418b-88ca-0c989e81e86b name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.750605034Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e348d2a5d14890caf884ea1ce039dd4f546dfadbc1d8dbe0a3bf1382e197ca91,PodSandboxId:ad112d1b49173406b211832777d2a4390fa2c3edba52ce58b3cecd45d0abe25b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725995070621710227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd77229d-0209-459f-ac5e-96317c425f60,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de828df738c57fe582d38572a48c8273fcbe5bb1c41abd2c6f9bc7f5730b7fa6,PodSandboxId:3f70057fd3e1e734f1da21d57f2d46424b49b6d27fb27bcb5d96533a4661375c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725995069699443775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bsp9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53cd67b5-b542-4b40-adf9-3aba78407735,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35969d1ba960c5faec33926528d2f07f401536183c4e0c8734e36606323d9cec,PodSandboxId:0c280d3aa3477243e23556c3523287fe0dcdf8bf1e28ca28f144f5f3f8174f53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725995069734358994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
66ea46-d3ad-44e4-b9fc-c7ea5c44ac15,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631aa6381282fd68571f32cd71582e9492dce784646d0f2054f0bd21ba8730b7,PodSandboxId:3b6ce16a74304a93a1fb7dfdaca51600ca0799e9a32f9f21500c7e4ea343a451,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1725995068750602687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwzhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f03fe8e3-bee7-4805-a1e9-83494f33105c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc75973e43d51392bdd21c0ffd7f7939030f85600f141f5da8161615bda8d8e3,PodSandboxId:159f5089030cf1fc1dbda76d7ae4d886c637252905b7753376110260f746a900,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725995057967469975,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e3770379cbff17e47846b6d74e2aec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8968d7d3a3c02537ada38712e94ba93157469e5d6031b68419aaedf967bd6ad2,PodSandboxId:aa60d460716120ed6687da3dac83c3a806349e88d73a25fc0ca88ec46d056023,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725995057941216061,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98dd89d8dae8c66b48ffae121412aa0,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56abb8524eda66af1df21c0d84c686b63d9e3558b4eb41cb1724a5cca4de3304,PodSandboxId:0fccef98c1bc1c3c65ba25cca14eaa722dbb92f836b278e098053564b2b884c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725995057912652155,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3176d1569984cba135ac1c183e76c043,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24feaaf348edf5d7d757a151448306d3a0e8dcb50cbf1b71477029b8ae6f1073,PodSandboxId:f6614203bea57cdc4b22bb6dde5e1705098501678f7204b4e86a3a3e10847d2d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725995057867312138,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a0ba536e0e91b581bfa3eeec42067e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec8014f1b16bfa976debb83c998a45fabfdf47d9320c7801ea5fc627951a2d16,PodSandboxId:dd6a911567a0e33c229ceb4d2602bd6b819d60f6f707571ff407a755cc2601a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725994769351562769,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98dd89d8dae8c66b48ffae121412aa0,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4979fbed-c2c8-418b-88ca-0c989e81e86b name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.788206666Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5599256c-83c1-44fc-8f14-b776b19bd9c4 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.788286074Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5599256c-83c1-44fc-8f14-b776b19bd9c4 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.789561524Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=27d6fdcd-5fcf-4068-9cf0-40858fd61d27 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.790249180Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995622790220382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27d6fdcd-5fcf-4068-9cf0-40858fd61d27 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.790823864Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6cbc7a7b-bb87-4f8f-843a-e3b9a8c059ae name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.790910949Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6cbc7a7b-bb87-4f8f-843a-e3b9a8c059ae name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.791188266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e348d2a5d14890caf884ea1ce039dd4f546dfadbc1d8dbe0a3bf1382e197ca91,PodSandboxId:ad112d1b49173406b211832777d2a4390fa2c3edba52ce58b3cecd45d0abe25b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725995070621710227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd77229d-0209-459f-ac5e-96317c425f60,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de828df738c57fe582d38572a48c8273fcbe5bb1c41abd2c6f9bc7f5730b7fa6,PodSandboxId:3f70057fd3e1e734f1da21d57f2d46424b49b6d27fb27bcb5d96533a4661375c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725995069699443775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bsp9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53cd67b5-b542-4b40-adf9-3aba78407735,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35969d1ba960c5faec33926528d2f07f401536183c4e0c8734e36606323d9cec,PodSandboxId:0c280d3aa3477243e23556c3523287fe0dcdf8bf1e28ca28f144f5f3f8174f53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725995069734358994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
66ea46-d3ad-44e4-b9fc-c7ea5c44ac15,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631aa6381282fd68571f32cd71582e9492dce784646d0f2054f0bd21ba8730b7,PodSandboxId:3b6ce16a74304a93a1fb7dfdaca51600ca0799e9a32f9f21500c7e4ea343a451,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1725995068750602687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwzhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f03fe8e3-bee7-4805-a1e9-83494f33105c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc75973e43d51392bdd21c0ffd7f7939030f85600f141f5da8161615bda8d8e3,PodSandboxId:159f5089030cf1fc1dbda76d7ae4d886c637252905b7753376110260f746a900,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725995057967469975,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e3770379cbff17e47846b6d74e2aec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8968d7d3a3c02537ada38712e94ba93157469e5d6031b68419aaedf967bd6ad2,PodSandboxId:aa60d460716120ed6687da3dac83c3a806349e88d73a25fc0ca88ec46d056023,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725995057941216061,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98dd89d8dae8c66b48ffae121412aa0,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56abb8524eda66af1df21c0d84c686b63d9e3558b4eb41cb1724a5cca4de3304,PodSandboxId:0fccef98c1bc1c3c65ba25cca14eaa722dbb92f836b278e098053564b2b884c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725995057912652155,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3176d1569984cba135ac1c183e76c043,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24feaaf348edf5d7d757a151448306d3a0e8dcb50cbf1b71477029b8ae6f1073,PodSandboxId:f6614203bea57cdc4b22bb6dde5e1705098501678f7204b4e86a3a3e10847d2d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725995057867312138,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a0ba536e0e91b581bfa3eeec42067e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec8014f1b16bfa976debb83c998a45fabfdf47d9320c7801ea5fc627951a2d16,PodSandboxId:dd6a911567a0e33c229ceb4d2602bd6b819d60f6f707571ff407a755cc2601a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725994769351562769,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98dd89d8dae8c66b48ffae121412aa0,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6cbc7a7b-bb87-4f8f-843a-e3b9a8c059ae name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.829080630Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=957edc5a-eedc-4268-8f11-586d91a6858a name=/runtime.v1.RuntimeService/Version
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.829154463Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=957edc5a-eedc-4268-8f11-586d91a6858a name=/runtime.v1.RuntimeService/Version
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.830318131Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42245ade-e3ab-408f-83f6-52b61b0d7d91 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.830687214Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995622830668035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42245ade-e3ab-408f-83f6-52b61b0d7d91 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.831209866Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0f54ccb-fa64-4379-9764-547b0eaef7e4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.831267469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0f54ccb-fa64-4379-9764-547b0eaef7e4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.831492887Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e348d2a5d14890caf884ea1ce039dd4f546dfadbc1d8dbe0a3bf1382e197ca91,PodSandboxId:ad112d1b49173406b211832777d2a4390fa2c3edba52ce58b3cecd45d0abe25b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725995070621710227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd77229d-0209-459f-ac5e-96317c425f60,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de828df738c57fe582d38572a48c8273fcbe5bb1c41abd2c6f9bc7f5730b7fa6,PodSandboxId:3f70057fd3e1e734f1da21d57f2d46424b49b6d27fb27bcb5d96533a4661375c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725995069699443775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bsp9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53cd67b5-b542-4b40-adf9-3aba78407735,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35969d1ba960c5faec33926528d2f07f401536183c4e0c8734e36606323d9cec,PodSandboxId:0c280d3aa3477243e23556c3523287fe0dcdf8bf1e28ca28f144f5f3f8174f53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725995069734358994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
66ea46-d3ad-44e4-b9fc-c7ea5c44ac15,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631aa6381282fd68571f32cd71582e9492dce784646d0f2054f0bd21ba8730b7,PodSandboxId:3b6ce16a74304a93a1fb7dfdaca51600ca0799e9a32f9f21500c7e4ea343a451,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1725995068750602687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwzhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f03fe8e3-bee7-4805-a1e9-83494f33105c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc75973e43d51392bdd21c0ffd7f7939030f85600f141f5da8161615bda8d8e3,PodSandboxId:159f5089030cf1fc1dbda76d7ae4d886c637252905b7753376110260f746a900,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725995057967469975,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e3770379cbff17e47846b6d74e2aec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8968d7d3a3c02537ada38712e94ba93157469e5d6031b68419aaedf967bd6ad2,PodSandboxId:aa60d460716120ed6687da3dac83c3a806349e88d73a25fc0ca88ec46d056023,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725995057941216061,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98dd89d8dae8c66b48ffae121412aa0,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56abb8524eda66af1df21c0d84c686b63d9e3558b4eb41cb1724a5cca4de3304,PodSandboxId:0fccef98c1bc1c3c65ba25cca14eaa722dbb92f836b278e098053564b2b884c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725995057912652155,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3176d1569984cba135ac1c183e76c043,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24feaaf348edf5d7d757a151448306d3a0e8dcb50cbf1b71477029b8ae6f1073,PodSandboxId:f6614203bea57cdc4b22bb6dde5e1705098501678f7204b4e86a3a3e10847d2d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725995057867312138,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a0ba536e0e91b581bfa3eeec42067e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec8014f1b16bfa976debb83c998a45fabfdf47d9320c7801ea5fc627951a2d16,PodSandboxId:dd6a911567a0e33c229ceb4d2602bd6b819d60f6f707571ff407a755cc2601a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725994769351562769,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98dd89d8dae8c66b48ffae121412aa0,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0f54ccb-fa64-4379-9764-547b0eaef7e4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.865275776Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e15194e-c90c-4a65-add6-6a8f82af3d6b name=/runtime.v1.RuntimeService/Version
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.865350680Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e15194e-c90c-4a65-add6-6a8f82af3d6b name=/runtime.v1.RuntimeService/Version
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.866829430Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=02006f2e-c17a-4e8e-bf77-cd2c549e93e2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.867438236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995622867412944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02006f2e-c17a-4e8e-bf77-cd2c549e93e2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.868187711Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2665464-9a9e-4c84-8ea3-0f30939d1e87 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.868257506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2665464-9a9e-4c84-8ea3-0f30939d1e87 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:42 no-preload-347802 crio[712]: time="2024-09-10 19:13:42.868473733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e348d2a5d14890caf884ea1ce039dd4f546dfadbc1d8dbe0a3bf1382e197ca91,PodSandboxId:ad112d1b49173406b211832777d2a4390fa2c3edba52ce58b3cecd45d0abe25b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725995070621710227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd77229d-0209-459f-ac5e-96317c425f60,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de828df738c57fe582d38572a48c8273fcbe5bb1c41abd2c6f9bc7f5730b7fa6,PodSandboxId:3f70057fd3e1e734f1da21d57f2d46424b49b6d27fb27bcb5d96533a4661375c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725995069699443775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bsp9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53cd67b5-b542-4b40-adf9-3aba78407735,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35969d1ba960c5faec33926528d2f07f401536183c4e0c8734e36606323d9cec,PodSandboxId:0c280d3aa3477243e23556c3523287fe0dcdf8bf1e28ca28f144f5f3f8174f53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725995069734358994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
66ea46-d3ad-44e4-b9fc-c7ea5c44ac15,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631aa6381282fd68571f32cd71582e9492dce784646d0f2054f0bd21ba8730b7,PodSandboxId:3b6ce16a74304a93a1fb7dfdaca51600ca0799e9a32f9f21500c7e4ea343a451,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1725995068750602687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwzhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f03fe8e3-bee7-4805-a1e9-83494f33105c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc75973e43d51392bdd21c0ffd7f7939030f85600f141f5da8161615bda8d8e3,PodSandboxId:159f5089030cf1fc1dbda76d7ae4d886c637252905b7753376110260f746a900,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725995057967469975,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e3770379cbff17e47846b6d74e2aec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8968d7d3a3c02537ada38712e94ba93157469e5d6031b68419aaedf967bd6ad2,PodSandboxId:aa60d460716120ed6687da3dac83c3a806349e88d73a25fc0ca88ec46d056023,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725995057941216061,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98dd89d8dae8c66b48ffae121412aa0,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56abb8524eda66af1df21c0d84c686b63d9e3558b4eb41cb1724a5cca4de3304,PodSandboxId:0fccef98c1bc1c3c65ba25cca14eaa722dbb92f836b278e098053564b2b884c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725995057912652155,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3176d1569984cba135ac1c183e76c043,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24feaaf348edf5d7d757a151448306d3a0e8dcb50cbf1b71477029b8ae6f1073,PodSandboxId:f6614203bea57cdc4b22bb6dde5e1705098501678f7204b4e86a3a3e10847d2d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725995057867312138,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a0ba536e0e91b581bfa3eeec42067e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec8014f1b16bfa976debb83c998a45fabfdf47d9320c7801ea5fc627951a2d16,PodSandboxId:dd6a911567a0e33c229ceb4d2602bd6b819d60f6f707571ff407a755cc2601a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725994769351562769,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98dd89d8dae8c66b48ffae121412aa0,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2665464-9a9e-4c84-8ea3-0f30939d1e87 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e348d2a5d1489       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   ad112d1b49173       storage-provisioner
	35969d1ba960c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   0c280d3aa3477       coredns-6f6b679f8f-hlbrz
	de828df738c57       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   3f70057fd3e1e       coredns-6f6b679f8f-bsp9f
	631aa6381282f       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   3b6ce16a74304       kube-proxy-gwzhs
	cc75973e43d51       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   159f5089030cf       etcd-no-preload-347802
	8968d7d3a3c02       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   aa60d46071612       kube-apiserver-no-preload-347802
	56abb8524eda6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   0fccef98c1bc1       kube-controller-manager-no-preload-347802
	24feaaf348edf       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   f6614203bea57       kube-scheduler-no-preload-347802
	ec8014f1b16bf       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   dd6a911567a0e       kube-apiserver-no-preload-347802
	
	
	==> coredns [35969d1ba960c5faec33926528d2f07f401536183c4e0c8734e36606323d9cec] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [de828df738c57fe582d38572a48c8273fcbe5bb1c41abd2c6f9bc7f5730b7fa6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-347802
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-347802
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=no-preload-347802
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T19_04_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 19:04:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-347802
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 19:13:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 19:09:40 +0000   Tue, 10 Sep 2024 19:04:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 19:09:40 +0000   Tue, 10 Sep 2024 19:04:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 19:09:40 +0000   Tue, 10 Sep 2024 19:04:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 19:09:40 +0000   Tue, 10 Sep 2024 19:04:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.138
	  Hostname:    no-preload-347802
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c3af0b1f4c84b17b5f7a7fa19478efe
	  System UUID:                0c3af0b1-f4c8-4b17-b5f7-a7fa19478efe
	  Boot ID:                    45e56c11-a123-4953-95e4-32947180dc98
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-bsp9f                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m15s
	  kube-system                 coredns-6f6b679f8f-hlbrz                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m15s
	  kube-system                 etcd-no-preload-347802                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-no-preload-347802             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-no-preload-347802    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-gwzhs                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-scheduler-no-preload-347802             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-6867b74b74-cz4tz              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m13s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m13s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  9m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m26s (x8 over 9m26s)  kubelet          Node no-preload-347802 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m26s (x8 over 9m26s)  kubelet          Node no-preload-347802 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m26s (x7 over 9m26s)  kubelet          Node no-preload-347802 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node no-preload-347802 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node no-preload-347802 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node no-preload-347802 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m16s                  node-controller  Node no-preload-347802 event: Registered Node no-preload-347802 in Controller
	  Normal  CIDRAssignmentFailed     9m16s                  cidrAllocator    Node no-preload-347802 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +0.040532] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.757802] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.373808] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Sep10 18:59] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.786004] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.054054] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053193] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.175985] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.147648] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.280478] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[ +15.753033] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.060564] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.210370] systemd-fstab-generator[1422]: Ignoring "noauto" option for root device
	[  +2.807161] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.299675] kauditd_printk_skb: 59 callbacks suppressed
	[  +8.421782] kauditd_printk_skb: 26 callbacks suppressed
	[Sep10 19:04] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.364469] systemd-fstab-generator[3063]: Ignoring "noauto" option for root device
	[  +4.536178] kauditd_printk_skb: 54 callbacks suppressed
	[  +2.015224] systemd-fstab-generator[3385]: Ignoring "noauto" option for root device
	[  +5.261816] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.244627] systemd-fstab-generator[3542]: Ignoring "noauto" option for root device
	[  +8.413040] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [cc75973e43d51392bdd21c0ffd7f7939030f85600f141f5da8161615bda8d8e3] <==
	{"level":"info","ts":"2024-09-10T19:04:18.444430Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-10T19:04:18.449231Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8b11dde95a80b86b","initial-advertise-peer-urls":["https://192.168.50.138:2380"],"listen-peer-urls":["https://192.168.50.138:2380"],"advertise-client-urls":["https://192.168.50.138:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.138:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-10T19:04:18.450107Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-10T19:04:18.446105Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.138:2380"}
	{"level":"info","ts":"2024-09-10T19:04:18.460989Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.138:2380"}
	{"level":"info","ts":"2024-09-10T19:04:19.269017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b11dde95a80b86b is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-10T19:04:19.269094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b11dde95a80b86b became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-10T19:04:19.269129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b11dde95a80b86b received MsgPreVoteResp from 8b11dde95a80b86b at term 1"}
	{"level":"info","ts":"2024-09-10T19:04:19.269150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b11dde95a80b86b became candidate at term 2"}
	{"level":"info","ts":"2024-09-10T19:04:19.269157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b11dde95a80b86b received MsgVoteResp from 8b11dde95a80b86b at term 2"}
	{"level":"info","ts":"2024-09-10T19:04:19.269166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b11dde95a80b86b became leader at term 2"}
	{"level":"info","ts":"2024-09-10T19:04:19.269173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8b11dde95a80b86b elected leader 8b11dde95a80b86b at term 2"}
	{"level":"info","ts":"2024-09-10T19:04:19.273402Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8b11dde95a80b86b","local-member-attributes":"{Name:no-preload-347802 ClientURLs:[https://192.168.50.138:2379]}","request-path":"/0/members/8b11dde95a80b86b/attributes","cluster-id":"ab0e41ccc9bb2ba","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T19:04:19.273488Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T19:04:19.273536Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T19:04:19.274050Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T19:04:19.276629Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T19:04:19.279251Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ab0e41ccc9bb2ba","local-member-id":"8b11dde95a80b86b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T19:04:19.279366Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T19:04:19.279419Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T19:04:19.279890Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T19:04:19.280697Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T19:04:19.284902Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.138:2379"}
	{"level":"info","ts":"2024-09-10T19:04:19.283427Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T19:04:19.287062Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:13:43 up 14 min,  0 users,  load average: 0.21, 0.28, 0.25
	Linux no-preload-347802 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8968d7d3a3c02537ada38712e94ba93157469e5d6031b68419aaedf967bd6ad2] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0910 19:09:21.776568       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:09:21.776640       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0910 19:09:21.777630       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0910 19:09:21.777706       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0910 19:10:21.778416       1 handler_proxy.go:99] no RequestInfo found in the context
	W0910 19:10:21.778417       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:10:21.778636       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0910 19:10:21.778754       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0910 19:10:21.780618       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0910 19:10:21.780680       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0910 19:12:21.781406       1 handler_proxy.go:99] no RequestInfo found in the context
	W0910 19:12:21.781406       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:12:21.781690       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0910 19:12:21.781739       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0910 19:12:21.783030       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0910 19:12:21.783046       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [ec8014f1b16bfa976debb83c998a45fabfdf47d9320c7801ea5fc627951a2d16] <==
	W0910 19:04:10.532296       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:11.263919       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:12.264891       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:13.725208       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:13.918186       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:13.950493       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:13.978202       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.131767       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.510408       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.701303       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.817808       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.824507       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.880277       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.933704       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.963834       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.978712       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.985553       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:15.073868       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:15.102785       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:15.106180       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:15.111568       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:15.159299       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:15.217746       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:15.323105       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:15.326522       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [56abb8524eda66af1df21c0d84c686b63d9e3558b4eb41cb1724a5cca4de3304] <==
	E0910 19:08:27.762731       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:08:28.208373       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:08:57.769745       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:08:58.215670       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:09:27.786166       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:09:28.223931       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0910 19:09:40.180363       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-347802"
	E0910 19:09:57.793049       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:09:58.232220       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0910 19:10:26.605920       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="225.42µs"
	E0910 19:10:27.801493       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:10:28.247802       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0910 19:10:37.605647       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="100.58µs"
	E0910 19:10:57.808364       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:10:58.256409       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:11:27.815530       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:11:28.265144       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:11:57.825151       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:11:58.273060       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:12:27.831680       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:12:28.285380       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:12:57.837760       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:12:58.293524       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:13:27.844530       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:13:28.301433       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [631aa6381282fd68571f32cd71582e9492dce784646d0f2054f0bd21ba8730b7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 19:04:29.204580       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 19:04:29.241650       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.138"]
	E0910 19:04:29.241733       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 19:04:29.317028       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 19:04:29.317118       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 19:04:29.317149       1 server_linux.go:169] "Using iptables Proxier"
	I0910 19:04:29.323087       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 19:04:29.323380       1 server.go:483] "Version info" version="v1.31.0"
	I0910 19:04:29.323392       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 19:04:29.334518       1 config.go:104] "Starting endpoint slice config controller"
	I0910 19:04:29.334547       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 19:04:29.334569       1 config.go:197] "Starting service config controller"
	I0910 19:04:29.334573       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 19:04:29.334894       1 config.go:326] "Starting node config controller"
	I0910 19:04:29.334903       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 19:04:29.435504       1 shared_informer.go:320] Caches are synced for node config
	I0910 19:04:29.435550       1 shared_informer.go:320] Caches are synced for service config
	I0910 19:04:29.435584       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [24feaaf348edf5d7d757a151448306d3a0e8dcb50cbf1b71477029b8ae6f1073] <==
	W0910 19:04:20.813636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 19:04:20.813662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:20.813705       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0910 19:04:20.813732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:20.813864       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 19:04:20.813993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:21.787002       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 19:04:21.787057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:21.793353       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0910 19:04:21.793400       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:21.838149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0910 19:04:21.838283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:21.918257       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 19:04:21.918494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:21.932093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 19:04:21.932147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:21.961218       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 19:04:21.961385       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:22.012205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0910 19:04:22.012312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:22.077814       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0910 19:04:22.078374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:22.176014       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0910 19:04:22.176121       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0910 19:04:25.396224       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 19:12:33 no-preload-347802 kubelet[3392]: E0910 19:12:33.590293    3392 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cz4tz" podUID="22d16ca9-922b-40d8-97d1-47a44ba70aa3"
	Sep 10 19:12:33 no-preload-347802 kubelet[3392]: E0910 19:12:33.779637    3392 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995553779145338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:33 no-preload-347802 kubelet[3392]: E0910 19:12:33.779682    3392 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995553779145338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:43 no-preload-347802 kubelet[3392]: E0910 19:12:43.782024    3392 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995563781359527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:43 no-preload-347802 kubelet[3392]: E0910 19:12:43.782287    3392 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995563781359527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:44 no-preload-347802 kubelet[3392]: E0910 19:12:44.588008    3392 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cz4tz" podUID="22d16ca9-922b-40d8-97d1-47a44ba70aa3"
	Sep 10 19:12:53 no-preload-347802 kubelet[3392]: E0910 19:12:53.783818    3392 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995573783124072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:53 no-preload-347802 kubelet[3392]: E0910 19:12:53.784208    3392 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995573783124072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:55 no-preload-347802 kubelet[3392]: E0910 19:12:55.588194    3392 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cz4tz" podUID="22d16ca9-922b-40d8-97d1-47a44ba70aa3"
	Sep 10 19:13:03 no-preload-347802 kubelet[3392]: E0910 19:13:03.786057    3392 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995583785498271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:03 no-preload-347802 kubelet[3392]: E0910 19:13:03.786104    3392 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995583785498271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:10 no-preload-347802 kubelet[3392]: E0910 19:13:10.587296    3392 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cz4tz" podUID="22d16ca9-922b-40d8-97d1-47a44ba70aa3"
	Sep 10 19:13:13 no-preload-347802 kubelet[3392]: E0910 19:13:13.788169    3392 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995593787822613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:13 no-preload-347802 kubelet[3392]: E0910 19:13:13.788209    3392 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995593787822613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:23 no-preload-347802 kubelet[3392]: E0910 19:13:23.636988    3392 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 19:13:23 no-preload-347802 kubelet[3392]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 19:13:23 no-preload-347802 kubelet[3392]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:13:23 no-preload-347802 kubelet[3392]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:13:23 no-preload-347802 kubelet[3392]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 19:13:23 no-preload-347802 kubelet[3392]: E0910 19:13:23.789714    3392 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995603789472157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:23 no-preload-347802 kubelet[3392]: E0910 19:13:23.789767    3392 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995603789472157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:24 no-preload-347802 kubelet[3392]: E0910 19:13:24.588668    3392 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cz4tz" podUID="22d16ca9-922b-40d8-97d1-47a44ba70aa3"
	Sep 10 19:13:33 no-preload-347802 kubelet[3392]: E0910 19:13:33.791150    3392 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995613790734049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:33 no-preload-347802 kubelet[3392]: E0910 19:13:33.791499    3392 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995613790734049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:39 no-preload-347802 kubelet[3392]: E0910 19:13:39.587415    3392 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cz4tz" podUID="22d16ca9-922b-40d8-97d1-47a44ba70aa3"
	
	
	==> storage-provisioner [e348d2a5d14890caf884ea1ce039dd4f546dfadbc1d8dbe0a3bf1382e197ca91] <==
	I0910 19:04:30.778938       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 19:04:30.793237       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 19:04:30.793313       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 19:04:30.808205       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 19:04:30.810141       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-347802_c1894828-b505-47d5-b2d2-2ccc297ff610!
	I0910 19:04:30.818308       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"23084522-d675-468e-9a48-deddae300d23", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-347802_c1894828-b505-47d5-b2d2-2ccc297ff610 became leader
	I0910 19:04:30.910724       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-347802_c1894828-b505-47d5-b2d2-2ccc297ff610!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-347802 -n no-preload-347802
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-347802 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-cz4tz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-347802 describe pod metrics-server-6867b74b74-cz4tz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-347802 describe pod metrics-server-6867b74b74-cz4tz: exit status 1 (67.633352ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-cz4tz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-347802 describe pod metrics-server-6867b74b74-cz4tz: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0910 19:05:38.987313   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:06:35.171282   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:07:04.781831   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:07:15.663814   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:07:55.870133   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-836868 -n embed-certs-836868
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-10 19:13:50.028321976 +0000 UTC m=+6289.852096744
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-836868 -n embed-certs-836868
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-836868 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-836868 logs -n 25: (2.017183179s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-642043 sudo cat                              | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo find                             | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo crio                             | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-642043                                       | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-186737 | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | disable-driver-mounts-186737                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:52 UTC |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-836868            | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-836868                                  | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-347802             | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-557504  | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC | 10 Sep 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC |                     |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-836868                 | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-836868                                  | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-432422        | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-347802                  | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-557504       | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-432422             | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:56:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:56:02.487676   72122 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:56:02.487789   72122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:56:02.487799   72122 out.go:358] Setting ErrFile to fd 2...
	I0910 18:56:02.487804   72122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:56:02.487953   72122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:56:02.488491   72122 out.go:352] Setting JSON to false
	I0910 18:56:02.489572   72122 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5914,"bootTime":1725988648,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 18:56:02.489637   72122 start.go:139] virtualization: kvm guest
	I0910 18:56:02.491991   72122 out.go:177] * [old-k8s-version-432422] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 18:56:02.493117   72122 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:56:02.493113   72122 notify.go:220] Checking for updates...
	I0910 18:56:02.494213   72122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:56:02.495356   72122 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:56:02.496370   72122 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:56:02.497440   72122 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 18:56:02.498703   72122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:56:02.500450   72122 config.go:182] Loaded profile config "old-k8s-version-432422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0910 18:56:02.501100   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:56:02.501150   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:56:02.515836   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0910 18:56:02.516286   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:56:02.516787   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:56:02.516815   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:56:02.517116   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:56:02.517300   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:56:02.519092   72122 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0910 18:56:02.520121   72122 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:56:02.520405   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:56:02.520436   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:56:02.534860   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34601
	I0910 18:56:02.535243   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:56:02.535688   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:56:02.535711   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:56:02.536004   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:56:02.536215   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:56:02.570682   72122 out.go:177] * Using the kvm2 driver based on existing profile
	I0910 18:56:02.571710   72122 start.go:297] selected driver: kvm2
	I0910 18:56:02.571722   72122 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:56:02.571821   72122 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:56:02.572465   72122 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:56:02.572528   72122 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 18:56:02.587001   72122 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 18:56:02.587381   72122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:56:02.587417   72122 cni.go:84] Creating CNI manager for ""
	I0910 18:56:02.587427   72122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:56:02.587471   72122 start.go:340] cluster config:
	{Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:56:02.587599   72122 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:56:02.589116   72122 out.go:177] * Starting "old-k8s-version-432422" primary control-plane node in "old-k8s-version-432422" cluster
	I0910 18:56:02.590155   72122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 18:56:02.590185   72122 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0910 18:56:02.590194   72122 cache.go:56] Caching tarball of preloaded images
	I0910 18:56:02.590294   72122 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 18:56:02.590313   72122 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0910 18:56:02.590415   72122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/config.json ...
	I0910 18:56:02.590612   72122 start.go:360] acquireMachinesLock for old-k8s-version-432422: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:56:08.097313   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:11.169360   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:17.249255   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:20.321326   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:26.401359   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:29.473351   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:35.553474   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:38.625322   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:44.705324   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:47.777408   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:53.857373   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:56.929356   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:03.009354   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:06.081346   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:12.161342   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:15.233363   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:21.313385   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:24.385281   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:30.465347   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:33.537368   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:39.617395   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:42.689359   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:48.769334   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:51.841388   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:57.921359   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:00.993375   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:07.073343   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:10.145433   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:16.225336   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:19.297345   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:25.377291   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:28.449365   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:34.529306   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:37.601300   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:43.681334   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:46.753328   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:49.757234   71529 start.go:364] duration metric: took 4m17.481092907s to acquireMachinesLock for "no-preload-347802"
	I0910 18:58:49.757299   71529 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:58:49.757316   71529 fix.go:54] fixHost starting: 
	I0910 18:58:49.757667   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:58:49.757694   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:58:49.772681   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41041
	I0910 18:58:49.773067   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:58:49.773498   71529 main.go:141] libmachine: Using API Version  1
	I0910 18:58:49.773518   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:58:49.773963   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:58:49.774127   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:58:49.774279   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 18:58:49.775704   71529 fix.go:112] recreateIfNeeded on no-preload-347802: state=Stopped err=<nil>
	I0910 18:58:49.775726   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	W0910 18:58:49.775886   71529 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:58:49.777669   71529 out.go:177] * Restarting existing kvm2 VM for "no-preload-347802" ...
	I0910 18:58:49.778739   71529 main.go:141] libmachine: (no-preload-347802) Calling .Start
	I0910 18:58:49.778882   71529 main.go:141] libmachine: (no-preload-347802) Ensuring networks are active...
	I0910 18:58:49.779509   71529 main.go:141] libmachine: (no-preload-347802) Ensuring network default is active
	I0910 18:58:49.779824   71529 main.go:141] libmachine: (no-preload-347802) Ensuring network mk-no-preload-347802 is active
	I0910 18:58:49.780121   71529 main.go:141] libmachine: (no-preload-347802) Getting domain xml...
	I0910 18:58:49.780766   71529 main.go:141] libmachine: (no-preload-347802) Creating domain...
	I0910 18:58:50.967405   71529 main.go:141] libmachine: (no-preload-347802) Waiting to get IP...
	I0910 18:58:50.968284   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:50.968647   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:50.968726   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:50.968628   72707 retry.go:31] will retry after 197.094328ms: waiting for machine to come up
	I0910 18:58:51.167237   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:51.167630   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:51.167683   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:51.167603   72707 retry.go:31] will retry after 272.376855ms: waiting for machine to come up
	I0910 18:58:51.441212   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:51.441673   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:51.441698   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:51.441636   72707 retry.go:31] will retry after 458.172114ms: waiting for machine to come up
	I0910 18:58:51.900991   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:51.901464   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:51.901498   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:51.901428   72707 retry.go:31] will retry after 442.42629ms: waiting for machine to come up
	I0910 18:58:49.754913   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:58:49.754977   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 18:58:49.755310   71183 buildroot.go:166] provisioning hostname "embed-certs-836868"
	I0910 18:58:49.755335   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 18:58:49.755513   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 18:58:49.757052   71183 machine.go:96] duration metric: took 4m37.423474417s to provisionDockerMachine
	I0910 18:58:49.757138   71183 fix.go:56] duration metric: took 4m37.44458491s for fixHost
	I0910 18:58:49.757149   71183 start.go:83] releasing machines lock for "embed-certs-836868", held for 4m37.444613055s
	W0910 18:58:49.757173   71183 start.go:714] error starting host: provision: host is not running
	W0910 18:58:49.757263   71183 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0910 18:58:49.757273   71183 start.go:729] Will try again in 5 seconds ...
	I0910 18:58:52.345053   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:52.345519   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:52.345540   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:52.345463   72707 retry.go:31] will retry after 732.353971ms: waiting for machine to come up
	I0910 18:58:53.079229   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:53.079686   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:53.079714   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:53.079638   72707 retry.go:31] will retry after 658.057224ms: waiting for machine to come up
	I0910 18:58:53.739313   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:53.739750   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:53.739811   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:53.739732   72707 retry.go:31] will retry after 910.559952ms: waiting for machine to come up
	I0910 18:58:54.651714   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:54.652075   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:54.652099   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:54.652027   72707 retry.go:31] will retry after 1.410431493s: waiting for machine to come up
	I0910 18:58:56.063996   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:56.064396   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:56.064418   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:56.064360   72707 retry.go:31] will retry after 1.795467467s: waiting for machine to come up
	I0910 18:58:54.759533   71183 start.go:360] acquireMachinesLock for embed-certs-836868: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:58:57.862130   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:57.862484   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:57.862509   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:57.862453   72707 retry.go:31] will retry after 1.450403908s: waiting for machine to come up
	I0910 18:58:59.315197   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:59.315621   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:59.315657   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:59.315566   72707 retry.go:31] will retry after 1.81005281s: waiting for machine to come up
	I0910 18:59:01.128164   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:01.128611   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:59:01.128642   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:59:01.128563   72707 retry.go:31] will retry after 3.333505805s: waiting for machine to come up
	I0910 18:59:04.464526   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:04.465004   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:59:04.465030   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:59:04.464951   72707 retry.go:31] will retry after 3.603817331s: waiting for machine to come up
	I0910 18:59:09.257584   71627 start.go:364] duration metric: took 4m27.770499275s to acquireMachinesLock for "default-k8s-diff-port-557504"
	I0910 18:59:09.257656   71627 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:09.257673   71627 fix.go:54] fixHost starting: 
	I0910 18:59:09.258100   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:09.258144   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:09.276230   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0910 18:59:09.276622   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:09.277129   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:09.277151   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:09.277489   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:09.277663   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:09.277793   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:09.279006   71627 fix.go:112] recreateIfNeeded on default-k8s-diff-port-557504: state=Stopped err=<nil>
	I0910 18:59:09.279043   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	W0910 18:59:09.279178   71627 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:09.281106   71627 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-557504" ...
	I0910 18:59:08.073057   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.073638   71529 main.go:141] libmachine: (no-preload-347802) Found IP for machine: 192.168.50.138
	I0910 18:59:08.073660   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has current primary IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.073666   71529 main.go:141] libmachine: (no-preload-347802) Reserving static IP address...
	I0910 18:59:08.074129   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "no-preload-347802", mac: "52:54:00:5b:b1:44", ip: "192.168.50.138"} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.074153   71529 main.go:141] libmachine: (no-preload-347802) Reserved static IP address: 192.168.50.138
	I0910 18:59:08.074170   71529 main.go:141] libmachine: (no-preload-347802) DBG | skip adding static IP to network mk-no-preload-347802 - found existing host DHCP lease matching {name: "no-preload-347802", mac: "52:54:00:5b:b1:44", ip: "192.168.50.138"}
	I0910 18:59:08.074179   71529 main.go:141] libmachine: (no-preload-347802) Waiting for SSH to be available...
	I0910 18:59:08.074187   71529 main.go:141] libmachine: (no-preload-347802) DBG | Getting to WaitForSSH function...
	I0910 18:59:08.076434   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.076744   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.076767   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.076928   71529 main.go:141] libmachine: (no-preload-347802) DBG | Using SSH client type: external
	I0910 18:59:08.076950   71529 main.go:141] libmachine: (no-preload-347802) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa (-rw-------)
	I0910 18:59:08.076979   71529 main.go:141] libmachine: (no-preload-347802) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:08.076992   71529 main.go:141] libmachine: (no-preload-347802) DBG | About to run SSH command:
	I0910 18:59:08.077029   71529 main.go:141] libmachine: (no-preload-347802) DBG | exit 0
	I0910 18:59:08.201181   71529 main.go:141] libmachine: (no-preload-347802) DBG | SSH cmd err, output: <nil>: 
	I0910 18:59:08.201561   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetConfigRaw
	I0910 18:59:08.202195   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:08.204390   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.204639   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.204676   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.204932   71529 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/config.json ...
	I0910 18:59:08.205227   71529 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:08.205245   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:08.205464   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.207451   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.207833   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.207862   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.207956   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.208120   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.208266   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.208402   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.208584   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.208811   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.208826   71529 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:08.317392   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:08.317421   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetMachineName
	I0910 18:59:08.317693   71529 buildroot.go:166] provisioning hostname "no-preload-347802"
	I0910 18:59:08.317721   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetMachineName
	I0910 18:59:08.317870   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.320440   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.320749   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.320777   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.320922   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.321092   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.321295   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.321433   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.321607   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.321764   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.321778   71529 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-347802 && echo "no-preload-347802" | sudo tee /etc/hostname
	I0910 18:59:08.442907   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-347802
	
	I0910 18:59:08.442932   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.445449   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.445743   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.445769   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.445930   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.446135   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.446308   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.446461   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.446642   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.446831   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.446853   71529 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-347802' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-347802/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-347802' | sudo tee -a /etc/hosts; 
				fi
			fi
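	For readers following the provisioning flow, the two SSH commands logged above amount to the following shell sequence (the machine name is taken from this log; this is a simplified sketch of the commands minikube sends over SSH, not its exact code path):

	NAME=no-preload-347802
	# set the transient and persistent hostname, as in the first logged SSH command
	sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	# make sure /etc/hosts resolves the hostname locally, as in the second logged command
	if ! grep -q "\s$NAME$" /etc/hosts; then
	  if grep -q '^127.0.1.1\s' /etc/hosts; then
	    sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NAME/" /etc/hosts
	  else
	    echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	  fi
	fi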
	I0910 18:59:08.561710   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:59:08.561738   71529 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:08.561760   71529 buildroot.go:174] setting up certificates
	I0910 18:59:08.561771   71529 provision.go:84] configureAuth start
	I0910 18:59:08.561782   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetMachineName
	I0910 18:59:08.562065   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:08.564917   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.565296   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.565318   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.565468   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.567579   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.567883   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.567909   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.567998   71529 provision.go:143] copyHostCerts
	I0910 18:59:08.568062   71529 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:08.568074   71529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:08.568155   71529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:08.568259   71529 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:08.568269   71529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:08.568297   71529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:08.568362   71529 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:08.568369   71529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:08.568398   71529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:08.568457   71529 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.no-preload-347802 san=[127.0.0.1 192.168.50.138 localhost minikube no-preload-347802]
	I0910 18:59:08.635212   71529 provision.go:177] copyRemoteCerts
	I0910 18:59:08.635296   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:08.635321   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.637851   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.638202   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.638227   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.638392   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.638561   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.638727   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.638850   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:08.723477   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:08.747854   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0910 18:59:08.770184   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:08.792105   71529 provision.go:87] duration metric: took 230.324534ms to configureAuth
	I0910 18:59:08.792125   71529 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:08.792306   71529 config.go:182] Loaded profile config "no-preload-347802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:59:08.792389   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.795139   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.795414   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.795440   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.795580   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.795767   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.795931   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.796075   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.796201   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.796385   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.796404   71529 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:09.021498   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:09.021530   71529 machine.go:96] duration metric: took 816.290576ms to provisionDockerMachine
	I0910 18:59:09.021540   71529 start.go:293] postStartSetup for "no-preload-347802" (driver="kvm2")
	I0910 18:59:09.021566   71529 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:09.021587   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.021923   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:09.021951   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.024598   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.024935   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.024965   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.025210   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.025416   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.025598   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.025747   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:09.107986   71529 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:09.111947   71529 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:09.111967   71529 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:09.112028   71529 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:09.112098   71529 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:09.112184   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:09.121734   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:09.144116   71529 start.go:296] duration metric: took 122.562738ms for postStartSetup
	I0910 18:59:09.144159   71529 fix.go:56] duration metric: took 19.386851685s for fixHost
	I0910 18:59:09.144183   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.146816   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.147237   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.147278   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.147396   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.147583   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.147754   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.147886   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.148060   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:09.148274   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:09.148285   71529 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:09.257433   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994749.232014074
	
	I0910 18:59:09.257456   71529 fix.go:216] guest clock: 1725994749.232014074
	I0910 18:59:09.257463   71529 fix.go:229] Guest: 2024-09-10 18:59:09.232014074 +0000 UTC Remote: 2024-09-10 18:59:09.144164668 +0000 UTC m=+277.006797443 (delta=87.849406ms)
	I0910 18:59:09.257478   71529 fix.go:200] guest clock delta is within tolerance: 87.849406ms
	I0910 18:59:09.257491   71529 start.go:83] releasing machines lock for "no-preload-347802", held for 19.50021281s
	I0910 18:59:09.257522   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.257777   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:09.260357   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.260690   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.260715   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.260895   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.261369   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.261545   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.261631   71529 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:09.261681   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.261749   71529 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:09.261774   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.264296   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.264598   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.264630   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.264650   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.264907   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.264992   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.265020   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.265067   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.265189   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.265266   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.265342   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.265400   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:09.265470   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.265602   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:09.367236   71529 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:09.373255   71529 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:09.513271   71529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:09.519091   71529 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:09.519153   71529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:09.534617   71529 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:09.534639   71529 start.go:495] detecting cgroup driver to use...
	I0910 18:59:09.534698   71529 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:09.551186   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:09.565123   71529 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:09.565193   71529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:09.578892   71529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:09.592571   71529 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:09.700953   71529 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:09.831175   71529 docker.go:233] disabling docker service ...
	I0910 18:59:09.831245   71529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:09.845755   71529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:09.858961   71529 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:10.008707   71529 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:10.144588   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:10.158486   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:10.176399   71529 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:59:10.176456   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.186448   71529 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:10.186511   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.196600   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.206639   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.216913   71529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:10.227030   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.237962   71529 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.255181   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.265618   71529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:10.275659   71529 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:10.275713   71529 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:10.288712   71529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:10.301886   71529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:10.415847   71529 ssh_runner.go:195] Run: sudo systemctl restart crio
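	In outline, the container-runtime preparation logged above boils down to this shell sequence on the guest (commands, paths, and values copied from the log entries; a condensed sketch that omits the conmon_cgroup and default_sysctls edits):

	# point crictl at the cri-o socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and the cgroup driver in cri-o's drop-in config
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# br_netfilter was not loaded (the sysctl probe above failed), so load it and enable IPv4 forwarding
	sudo modprobe br_netfilter
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sudo systemctl daemon-reload && sudo systemctl restart crio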
	I0910 18:59:10.500738   71529 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:10.500829   71529 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:10.506564   71529 start.go:563] Will wait 60s for crictl version
	I0910 18:59:10.506620   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:10.510639   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:10.553929   71529 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:10.554034   71529 ssh_runner.go:195] Run: crio --version
	I0910 18:59:10.582508   71529 ssh_runner.go:195] Run: crio --version
	I0910 18:59:10.622516   71529 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 18:59:09.282182   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Start
	I0910 18:59:09.282345   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Ensuring networks are active...
	I0910 18:59:09.282958   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Ensuring network default is active
	I0910 18:59:09.283450   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Ensuring network mk-default-k8s-diff-port-557504 is active
	I0910 18:59:09.283810   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Getting domain xml...
	I0910 18:59:09.284454   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Creating domain...
	I0910 18:59:10.513168   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting to get IP...
	I0910 18:59:10.514173   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.514591   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.514681   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:10.514587   72843 retry.go:31] will retry after 228.672382ms: waiting for machine to come up
	I0910 18:59:10.745046   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.745450   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.745508   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:10.745440   72843 retry.go:31] will retry after 329.196616ms: waiting for machine to come up
	I0910 18:59:11.075777   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.076237   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.076269   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:11.076188   72843 retry.go:31] will retry after 317.98463ms: waiting for machine to come up
	I0910 18:59:10.623864   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:10.626709   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:10.627042   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:10.627084   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:10.627336   71529 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:10.631579   71529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:10.644077   71529 kubeadm.go:883] updating cluster {Name:no-preload-347802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-347802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:10.644183   71529 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:59:10.644215   71529 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:10.679225   71529 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 18:59:10.679247   71529 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 18:59:10.679332   71529 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:10.679346   71529 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:10.679371   71529 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:10.679384   71529 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0910 18:59:10.679395   71529 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.679472   71529 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.679371   71529 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:10.679336   71529 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:10.681147   71529 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.681163   71529 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:10.681183   71529 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.681196   71529 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0910 18:59:10.681163   71529 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:10.681189   71529 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:10.681232   71529 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:10.681304   71529 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:10.841312   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.848638   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.872351   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:10.875581   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:10.882457   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:10.894360   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0910 18:59:10.895305   71529 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0910 18:59:10.895341   71529 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.895379   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:10.898460   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:10.953614   71529 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0910 18:59:10.953659   71529 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.953706   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.042770   71529 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0910 18:59:11.042837   71529 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0910 18:59:11.042862   71529 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.042873   71529 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.042914   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.042914   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.042820   71529 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0910 18:59:11.043065   71529 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.043097   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.129993   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:11.130090   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:11.130018   71529 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0910 18:59:11.130143   71529 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.130187   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.130189   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.130206   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.130271   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.239573   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:11.239626   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:11.241780   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.241795   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.241853   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.241883   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.360008   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:11.360027   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:11.360067   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.371623   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.371632   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.371632   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.480504   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0910 18:59:11.480591   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.480615   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0910 18:59:11.480635   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0910 18:59:11.480725   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0910 18:59:11.488248   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:11.510860   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0910 18:59:11.510950   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0910 18:59:11.510959   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0910 18:59:11.511032   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0910 18:59:11.514065   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0910 18:59:11.514136   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0910 18:59:11.555358   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0910 18:59:11.555425   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0910 18:59:11.555445   71529 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0910 18:59:11.555465   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0910 18:59:11.555491   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0910 18:59:11.555497   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0910 18:59:11.578210   71529 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0910 18:59:11.578227   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0910 18:59:11.578258   71529 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:11.578273   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0910 18:59:11.578306   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.578345   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0910 18:59:11.578310   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0910 18:59:11.395907   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.396361   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.396389   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:11.396320   72843 retry.go:31] will retry after 511.273215ms: waiting for machine to come up
	I0910 18:59:11.909582   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.910012   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.910041   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:11.909957   72843 retry.go:31] will retry after 712.801984ms: waiting for machine to come up
	I0910 18:59:12.624608   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:12.625042   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:12.625083   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:12.625014   72843 retry.go:31] will retry after 873.57855ms: waiting for machine to come up
	I0910 18:59:13.499767   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:13.500117   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:13.500144   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:13.500071   72843 retry.go:31] will retry after 1.180667971s: waiting for machine to come up
	I0910 18:59:14.682848   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:14.683351   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:14.683381   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:14.683297   72843 retry.go:31] will retry after 1.211684184s: waiting for machine to come up
	I0910 18:59:15.896172   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:15.896651   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:15.896679   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:15.896597   72843 retry.go:31] will retry after 1.541313035s: waiting for machine to come up
	I0910 18:59:13.534642   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.978971061s)
	I0910 18:59:13.534680   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0910 18:59:13.534686   71529 ssh_runner.go:235] Completed: which crictl: (1.956359959s)
	I0910 18:59:13.534704   71529 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0910 18:59:13.534753   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:13.534754   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0910 18:59:13.580670   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:17.439293   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:17.439652   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:17.439679   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:17.439607   72843 retry.go:31] will retry after 2.232253017s: waiting for machine to come up
	I0910 18:59:19.673727   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:19.674114   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:19.674141   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:19.674070   72843 retry.go:31] will retry after 2.324233118s: waiting for machine to come up
	I0910 18:59:17.225574   71529 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.644871938s)
	I0910 18:59:17.225574   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.690724664s)
	I0910 18:59:17.225647   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0910 18:59:17.225671   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:17.225676   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0910 18:59:17.225702   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0910 18:59:19.705947   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.48021773s)
	I0910 18:59:19.705982   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0910 18:59:19.706006   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0910 18:59:19.706045   71529 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.480359026s)
	I0910 18:59:19.706069   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0910 18:59:19.706098   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0910 18:59:19.706176   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0910 18:59:21.666588   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.960494926s)
	I0910 18:59:21.666623   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0910 18:59:21.666640   71529 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.960446302s)
	I0910 18:59:21.666648   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0910 18:59:21.666666   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0910 18:59:21.666699   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0910 18:59:22.000591   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:22.001014   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:22.001047   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:22.000951   72843 retry.go:31] will retry after 3.327224401s: waiting for machine to come up
	I0910 18:59:25.329967   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:25.330414   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:25.330445   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:25.330367   72843 retry.go:31] will retry after 3.45596573s: waiting for machine to come up
	I0910 18:59:23.216195   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.549470753s)
	I0910 18:59:23.216223   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0910 18:59:23.216243   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0910 18:59:23.216286   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0910 18:59:25.077483   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.861176975s)
	I0910 18:59:25.077515   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0910 18:59:25.077547   71529 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0910 18:59:25.077640   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0910 18:59:25.919427   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0910 18:59:25.919478   71529 cache_images.go:123] Successfully loaded all cached images
	I0910 18:59:25.919486   71529 cache_images.go:92] duration metric: took 15.240223152s to LoadCachedImages
	I0910 18:59:25.919502   71529 kubeadm.go:934] updating node { 192.168.50.138 8443 v1.31.0 crio true true} ...
	I0910 18:59:25.919622   71529 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-347802 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-347802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:59:25.919710   71529 ssh_runner.go:195] Run: crio config
	I0910 18:59:25.964461   71529 cni.go:84] Creating CNI manager for ""
	I0910 18:59:25.964489   71529 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:25.964509   71529 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:25.964535   71529 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.138 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-347802 NodeName:no-preload-347802 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:59:25.964698   71529 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-347802"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:59:25.964780   71529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:59:25.975304   71529 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:25.975371   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:25.985124   71529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0910 18:59:26.003355   71529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:26.020117   71529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0910 18:59:26.037026   71529 ssh_runner.go:195] Run: grep 192.168.50.138	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:26.041140   71529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:26.053643   71529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:26.175281   71529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:26.193153   71529 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802 for IP: 192.168.50.138
	I0910 18:59:26.193181   71529 certs.go:194] generating shared ca certs ...
	I0910 18:59:26.193203   71529 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:26.193398   71529 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:26.193452   71529 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:26.193466   71529 certs.go:256] generating profile certs ...
	I0910 18:59:26.193582   71529 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/client.key
	I0910 18:59:26.193664   71529 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/apiserver.key.93ff3787
	I0910 18:59:26.193722   71529 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/proxy-client.key
	I0910 18:59:26.193871   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:26.193924   71529 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:26.193978   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:26.194026   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:26.194053   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:26.194083   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:26.194132   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:26.194868   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:26.231957   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:26.280213   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:26.310722   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:26.347855   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0910 18:59:26.386495   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:59:26.411742   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:26.435728   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:59:26.460305   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:26.484974   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:26.508782   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:26.531397   71529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:26.548219   71529 ssh_runner.go:195] Run: openssl version
	I0910 18:59:26.553969   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:26.564950   71529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:26.569539   71529 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:26.569594   71529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:26.575677   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:59:26.586342   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:26.606946   71529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:26.611671   71529 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:26.611720   71529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:26.617271   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:26.627833   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:26.638225   71529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:26.642722   71529 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:26.642759   71529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:26.648359   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:26.659003   71529 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:26.663236   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:26.668896   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:26.674346   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:26.680028   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:26.685462   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:26.691097   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 18:59:26.696620   71529 kubeadm.go:392] StartCluster: {Name:no-preload-347802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-347802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:26.696704   71529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:26.696746   71529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:26.733823   71529 cri.go:89] found id: ""
	I0910 18:59:26.733883   71529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:26.744565   71529 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:26.744584   71529 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:26.744620   71529 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:26.754754   71529 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:26.755687   71529 kubeconfig.go:125] found "no-preload-347802" server: "https://192.168.50.138:8443"
	I0910 18:59:26.757732   71529 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:26.767140   71529 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.138
	I0910 18:59:26.767167   71529 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:26.767180   71529 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:26.767235   71529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:26.805555   71529 cri.go:89] found id: ""
	I0910 18:59:26.805616   71529 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:26.822806   71529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:26.832434   71529 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:26.832456   71529 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:26.832499   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:59:26.841225   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:26.841288   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:26.850145   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:59:26.859016   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:26.859070   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:26.868806   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:59:26.877814   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:26.877867   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:26.886985   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:59:26.895859   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:26.895911   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:59:26.905600   71529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:59:26.915716   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:27.038963   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:30.202285   72122 start.go:364] duration metric: took 3m27.611616445s to acquireMachinesLock for "old-k8s-version-432422"
	I0910 18:59:30.202346   72122 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:30.202377   72122 fix.go:54] fixHost starting: 
	I0910 18:59:30.202807   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:30.202842   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:30.222440   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0910 18:59:30.222927   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:30.223415   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:59:30.223435   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:30.223748   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:30.223905   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:30.224034   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetState
	I0910 18:59:30.225464   72122 fix.go:112] recreateIfNeeded on old-k8s-version-432422: state=Stopped err=<nil>
	I0910 18:59:30.225505   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	W0910 18:59:30.225655   72122 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:30.227698   72122 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-432422" ...
	I0910 18:59:28.790020   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.790390   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Found IP for machine: 192.168.72.54
	I0910 18:59:28.790424   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has current primary IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.790435   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Reserving static IP address...
	I0910 18:59:28.790758   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Reserved static IP address: 192.168.72.54
	I0910 18:59:28.790780   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for SSH to be available...
	I0910 18:59:28.790811   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-557504", mac: "52:54:00:19:b8:3d", ip: "192.168.72.54"} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.790839   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | skip adding static IP to network mk-default-k8s-diff-port-557504 - found existing host DHCP lease matching {name: "default-k8s-diff-port-557504", mac: "52:54:00:19:b8:3d", ip: "192.168.72.54"}
	I0910 18:59:28.790856   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Getting to WaitForSSH function...
	I0910 18:59:28.792644   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.792947   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.792978   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.793114   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Using SSH client type: external
	I0910 18:59:28.793135   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa (-rw-------)
	I0910 18:59:28.793192   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:28.793242   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | About to run SSH command:
	I0910 18:59:28.793272   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | exit 0
	I0910 18:59:28.921644   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | SSH cmd err, output: <nil>: 
	I0910 18:59:28.921983   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetConfigRaw
	I0910 18:59:28.922663   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:28.925273   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.925614   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.925639   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.925884   71627 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/config.json ...
	I0910 18:59:28.926061   71627 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:28.926077   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:28.926272   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:28.928411   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.928731   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.928758   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.928909   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:28.929096   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:28.929249   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:28.929371   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:28.929552   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:28.929722   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:28.929732   71627 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:29.041454   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:29.041486   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetMachineName
	I0910 18:59:29.041745   71627 buildroot.go:166] provisioning hostname "default-k8s-diff-port-557504"
	I0910 18:59:29.041766   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetMachineName
	I0910 18:59:29.041965   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.044784   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.045141   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.045182   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.045358   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.045528   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.045705   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.045810   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.045968   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:29.046158   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:29.046173   71627 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-557504 && echo "default-k8s-diff-port-557504" | sudo tee /etc/hostname
	I0910 18:59:29.180227   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-557504
	
	I0910 18:59:29.180257   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.182815   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.183166   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.183200   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.183416   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.183612   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.183779   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.183883   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.184053   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:29.184258   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:29.184276   71627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-557504' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-557504/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-557504' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:59:29.315908   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:59:29.315942   71627 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:29.315981   71627 buildroot.go:174] setting up certificates
	I0910 18:59:29.315996   71627 provision.go:84] configureAuth start
	I0910 18:59:29.316013   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetMachineName
	I0910 18:59:29.316262   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:29.319207   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.319580   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.319609   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.319780   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.321973   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.322318   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.322352   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.322499   71627 provision.go:143] copyHostCerts
	I0910 18:59:29.322564   71627 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:29.322577   71627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:29.322647   71627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:29.322772   71627 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:29.322786   71627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:29.322832   71627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:29.322938   71627 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:29.322951   71627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:29.322986   71627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:29.323065   71627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-557504 san=[127.0.0.1 192.168.72.54 default-k8s-diff-port-557504 localhost minikube]
	I0910 18:59:29.488131   71627 provision.go:177] copyRemoteCerts
	I0910 18:59:29.488187   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:29.488210   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.491095   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.491441   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.491467   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.491666   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.491830   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.491973   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.492123   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:29.584016   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:29.614749   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0910 18:59:29.646904   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:29.677788   71627 provision.go:87] duration metric: took 361.777725ms to configureAuth
	I0910 18:59:29.677820   71627 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:29.678048   71627 config.go:182] Loaded profile config "default-k8s-diff-port-557504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:59:29.678135   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.680932   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.681372   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.681394   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.681674   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.681868   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.682011   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.682175   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.682431   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:29.682638   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:29.682665   71627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:29.934027   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:29.934058   71627 machine.go:96] duration metric: took 1.007985288s to provisionDockerMachine
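
	[Editor's note: the provisionDockerMachine phase above is a sequence of shell commands (hostname, /etc/hosts edit, cert placement, CRI-O options) executed over SSH inside the KVM guest. Below is a minimal, hedged sketch of that run-a-command-over-SSH pattern in Go; it is not minikube's ssh_runner, and the address, user and key path are placeholders taken loosely from the log.]

	package main

	import (
	    "fmt"
	    "os"

	    "golang.org/x/crypto/ssh"
	)

	// runOverSSH dials addr with key-based auth and returns the combined
	// output of cmd. Illustrative only; error handling and host-key checking
	// are deliberately minimal.
	func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	    key, err := os.ReadFile(keyPath)
	    if err != nil {
	        return "", err
	    }
	    signer, err := ssh.ParsePrivateKey(key)
	    if err != nil {
	        return "", err
	    }
	    cfg := &ssh.ClientConfig{
	        User:            user,
	        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	    }
	    client, err := ssh.Dial("tcp", addr, cfg)
	    if err != nil {
	        return "", err
	    }
	    defer client.Close()
	    sess, err := client.NewSession()
	    if err != nil {
	        return "", err
	    }
	    defer sess.Close()
	    out, err := sess.CombinedOutput(cmd)
	    return string(out), err
	}

	func main() {
	    // Placeholder values; the log above shows 192.168.72.54:22 and user "docker".
	    out, err := runOverSSH("192.168.72.54:22", "docker", "/path/to/id_rsa", "hostname")
	    fmt.Println(out, err)
	}
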
	I0910 18:59:29.934071   71627 start.go:293] postStartSetup for "default-k8s-diff-port-557504" (driver="kvm2")
	I0910 18:59:29.934084   71627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:29.934104   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:29.934415   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:29.934447   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.937552   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.937917   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.937948   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.938110   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.938315   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.938496   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.938645   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:30.030842   71627 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:30.036158   71627 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:30.036180   71627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:30.036267   71627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:30.036380   71627 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:30.036520   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:30.048860   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:30.075362   71627 start.go:296] duration metric: took 141.276186ms for postStartSetup
	I0910 18:59:30.075398   71627 fix.go:56] duration metric: took 20.817735357s for fixHost
	I0910 18:59:30.075421   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:30.078501   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.078996   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.079026   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.079195   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:30.079373   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.079561   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.079704   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:30.079908   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:30.080089   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:30.080102   71627 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:30.202112   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994770.178719125
	
	I0910 18:59:30.202139   71627 fix.go:216] guest clock: 1725994770.178719125
	I0910 18:59:30.202149   71627 fix.go:229] Guest: 2024-09-10 18:59:30.178719125 +0000 UTC Remote: 2024-09-10 18:59:30.075402937 +0000 UTC m=+288.723404352 (delta=103.316188ms)
	I0910 18:59:30.202175   71627 fix.go:200] guest clock delta is within tolerance: 103.316188ms
	I0910 18:59:30.202184   71627 start.go:83] releasing machines lock for "default-k8s-diff-port-557504", held for 20.944552577s
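
	[Editor's note: the fix.go lines above run `date +%s.%N` in the guest and compare the result against the host clock, accepting the drift if it is within tolerance. A small standalone sketch of that comparison; the tolerance value here is arbitrary and the timestamp is copied from the log.]

	package main

	import (
	    "fmt"
	    "strconv"
	    "strings"
	    "time"
	)

	// parseGuestClock converts "seconds.nanoseconds" (the output of `date +%s.%N`)
	// into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
	    parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	    sec, err := strconv.ParseInt(parts[0], 10, 64)
	    if err != nil {
	        return time.Time{}, err
	    }
	    var nsec int64
	    if len(parts) == 2 {
	        frac := (parts[1] + "000000000")[:9] // pad/truncate fraction to 9 digits
	        if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
	            return time.Time{}, err
	        }
	    }
	    return time.Unix(sec, nsec), nil
	}

	func main() {
	    guest, err := parseGuestClock("1725994770.178719125") // value from the log above
	    if err != nil {
	        fmt.Println(err)
	        return
	    }
	    delta := guest.Sub(time.Now())
	    if delta < 0 {
	        delta = -delta
	    }
	    const tolerance = time.Second // arbitrary for this sketch
	    fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta < tolerance)
	}
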
	I0910 18:59:30.202221   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.202522   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:30.205728   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.206068   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.206101   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.206267   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.206830   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.207011   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.207100   71627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:30.207171   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:30.207378   71627 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:30.207399   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:30.209851   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210130   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210182   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.210220   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210400   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:30.210553   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.210555   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.210625   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210735   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:30.210785   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:30.210849   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.210949   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:30.211002   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:30.211132   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:30.317738   71627 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:30.325333   71627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:30.485483   71627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:30.492979   71627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:30.493064   71627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:30.518974   71627 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:30.518998   71627 start.go:495] detecting cgroup driver to use...
	I0910 18:59:30.519192   71627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:30.539578   71627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:30.554986   71627 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:30.555045   71627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:30.570454   71627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:30.590125   71627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:30.738819   71627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:30.930750   71627 docker.go:233] disabling docker service ...
	I0910 18:59:30.930811   71627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:30.946226   71627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:30.961633   71627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:31.086069   71627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:31.208629   71627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:31.225988   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:31.248059   71627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:59:31.248127   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.260212   71627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:31.260296   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.271128   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.282002   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.296901   71627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:31.309739   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.325469   71627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.350404   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.366130   71627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:31.379206   71627 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:31.379259   71627 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:31.395015   71627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:31.406339   71627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:31.538783   71627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:59:31.656815   71627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:31.656886   71627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:31.665263   71627 start.go:563] Will wait 60s for crictl version
	I0910 18:59:31.665333   71627 ssh_runner.go:195] Run: which crictl
	I0910 18:59:31.670317   71627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:31.719549   71627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:31.719641   71627 ssh_runner.go:195] Run: crio --version
	I0910 18:59:31.753801   71627 ssh_runner.go:195] Run: crio --version
	I0910 18:59:31.787092   71627 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 18:59:28.257536   71529 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.218537615s)
	I0910 18:59:28.257562   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:28.451173   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:28.516432   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:28.605746   71529 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:59:28.605823   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:29.106870   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:29.606340   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:29.623814   71529 api_server.go:72] duration metric: took 1.018071553s to wait for apiserver process to appear ...
	I0910 18:59:29.623842   71529 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:59:29.623864   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:29.624282   71529 api_server.go:269] stopped: https://192.168.50.138:8443/healthz: Get "https://192.168.50.138:8443/healthz": dial tcp 192.168.50.138:8443: connect: connection refused
	I0910 18:59:30.124145   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:30.228896   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .Start
	I0910 18:59:30.229066   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring networks are active...
	I0910 18:59:30.229735   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring network default is active
	I0910 18:59:30.230126   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring network mk-old-k8s-version-432422 is active
	I0910 18:59:30.230559   72122 main.go:141] libmachine: (old-k8s-version-432422) Getting domain xml...
	I0910 18:59:30.231206   72122 main.go:141] libmachine: (old-k8s-version-432422) Creating domain...
	I0910 18:59:31.669616   72122 main.go:141] libmachine: (old-k8s-version-432422) Waiting to get IP...
	I0910 18:59:31.670682   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:31.671124   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:31.671225   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:31.671101   72995 retry.go:31] will retry after 285.109621ms: waiting for machine to come up
	I0910 18:59:31.957711   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:31.958140   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:31.958169   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:31.958103   72995 retry.go:31] will retry after 306.703176ms: waiting for machine to come up
	I0910 18:59:32.266797   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:32.267299   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:32.267333   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:32.267226   72995 retry.go:31] will retry after 327.953362ms: waiting for machine to come up
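
	[Editor's note: while the old-k8s-version-432422 VM boots, libmachine polls for a DHCP lease with a short randomized backoff (the retry.go lines above). A generic sketch of that wait-with-backoff pattern follows; the backoff bounds and attempt count are made up for illustration.]

	package main

	import (
	    "errors"
	    "fmt"
	    "math/rand"
	    "time"
	)

	// waitFor polls lookup until it succeeds or attempts are exhausted,
	// sleeping a short randomized interval between tries, similar in spirit
	// to the retry messages in the log.
	func waitFor(lookup func() (string, error), attempts int) (string, error) {
	    for i := 0; i < attempts; i++ {
	        v, err := lookup()
	        if err == nil {
	            return v, nil
	        }
	        backoff := time.Duration(200+rand.Intn(300)) * time.Millisecond // made-up bounds
	        fmt.Printf("will retry after %v: %v\n", backoff, err)
	        time.Sleep(backoff)
	    }
	    return "", fmt.Errorf("gave up after %d attempts", attempts)
	}

	func main() {
	    _, err := waitFor(func() (string, error) {
	        return "", errors.New("unable to find current IP address")
	    }, 3)
	    fmt.Println(err)
	}
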
	I0910 18:59:32.494151   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:59:32.494177   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:59:32.494193   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:32.550283   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:59:32.550317   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:59:32.624486   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:32.646548   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:32.646583   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:33.124697   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:33.139775   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:33.139814   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:33.623998   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:33.632392   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:33.632430   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:34.123979   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:34.133552   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0910 18:59:34.143511   71529 api_server.go:141] control plane version: v1.31.0
	I0910 18:59:34.143543   71529 api_server.go:131] duration metric: took 4.519693435s to wait for apiserver health ...
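
	[Editor's note: the api_server.go lines above poll /healthz until the apiserver stops answering 403/500 and returns 200 ok. A self-contained sketch of that polling loop; the URL is copied from the log, and the TLS handling (InsecureSkipVerify) is a simplification for illustration, not how the real client is configured.]

	package main

	import (
	    "crypto/tls"
	    "fmt"
	    "net/http"
	    "time"
	)

	// pollHealthz GETs url until it returns HTTP 200 or the deadline passes.
	func pollHealthz(url string, timeout time.Duration) error {
	    client := &http.Client{
	        Timeout: 5 * time.Second,
	        Transport: &http.Transport{
	            // Simplification for this sketch; certificate handling in the
	            // real code differs.
	            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	        },
	    }
	    deadline := time.Now().Add(timeout)
	    for time.Now().Before(deadline) {
	        resp, err := client.Get(url)
	        if err == nil {
	            resp.Body.Close()
	            if resp.StatusCode == http.StatusOK {
	                return nil
	            }
	            fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
	        } else {
	            fmt.Printf("healthz request failed: %v, retrying\n", err)
	        }
	        time.Sleep(500 * time.Millisecond)
	    }
	    return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
	    fmt.Println(pollHealthz("https://192.168.50.138:8443/healthz", 30*time.Second))
	}
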
	I0910 18:59:34.143552   71529 cni.go:84] Creating CNI manager for ""
	I0910 18:59:34.143558   71529 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:34.145562   71529 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 18:59:31.788472   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:31.791698   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:31.792063   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:31.792102   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:31.792342   71627 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:31.798045   71627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:31.814552   71627 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-557504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-557504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:31.814718   71627 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:59:31.814775   71627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:31.863576   71627 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 18:59:31.863655   71627 ssh_runner.go:195] Run: which lz4
	I0910 18:59:31.868776   71627 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 18:59:31.874162   71627 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 18:59:31.874194   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 18:59:33.358271   71627 crio.go:462] duration metric: took 1.489531006s to copy over tarball
	I0910 18:59:33.358356   71627 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 18:59:35.759805   71627 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.401424942s)
	I0910 18:59:35.759833   71627 crio.go:469] duration metric: took 2.401529016s to extract the tarball
	I0910 18:59:35.759842   71627 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 18:59:35.797349   71627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:35.849544   71627 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:59:35.849571   71627 cache_images.go:84] Images are preloaded, skipping loading
	I0910 18:59:35.849583   71627 kubeadm.go:934] updating node { 192.168.72.54 8444 v1.31.0 crio true true} ...
	I0910 18:59:35.849706   71627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-557504 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-557504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:59:35.849783   71627 ssh_runner.go:195] Run: crio config
	I0910 18:59:35.896486   71627 cni.go:84] Creating CNI manager for ""
	I0910 18:59:35.896514   71627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:35.896534   71627 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:35.896556   71627 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-557504 NodeName:default-k8s-diff-port-557504 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:59:35.896707   71627 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-557504"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:59:35.896777   71627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:59:35.907249   71627 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:35.907337   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:35.917196   71627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0910 18:59:35.935072   71627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:35.953823   71627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0910 18:59:35.970728   71627 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:35.974648   71627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:35.986487   71627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:36.144443   71627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:36.164942   71627 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504 for IP: 192.168.72.54
	I0910 18:59:36.164972   71627 certs.go:194] generating shared ca certs ...
	I0910 18:59:36.164990   71627 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:36.165172   71627 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:36.165242   71627 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:36.165255   71627 certs.go:256] generating profile certs ...
	I0910 18:59:36.165382   71627 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/client.key
	I0910 18:59:36.165460   71627 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/apiserver.key.5cc31a18
	I0910 18:59:36.165505   71627 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/proxy-client.key
	I0910 18:59:36.165640   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:36.165680   71627 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:36.165700   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:36.165733   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:36.165770   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:36.165803   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:36.165874   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:36.166687   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:36.203302   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:36.230599   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:36.269735   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:36.311674   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0910 18:59:36.354614   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 18:59:36.379082   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:34.146903   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 18:59:34.163037   71529 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 18:59:34.189830   71529 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:59:34.200702   71529 system_pods.go:59] 8 kube-system pods found
	I0910 18:59:34.200751   71529 system_pods.go:61] "coredns-6f6b679f8f-54rpl" [2e301d43-a54a-4836-abf8-a45f5bc15889] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 18:59:34.200762   71529 system_pods.go:61] "etcd-no-preload-347802" [0fdffb97-72c6-4588-9593-46bcbed0a9fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 18:59:34.200773   71529 system_pods.go:61] "kube-apiserver-no-preload-347802" [3cf5abac-1d94-4ee2-a962-9daad308ec8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 18:59:34.200782   71529 system_pods.go:61] "kube-controller-manager-no-preload-347802" [6769757d-57fd-46c8-8f78-d20f80e592d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 18:59:34.200788   71529 system_pods.go:61] "kube-proxy-7v9n8" [d01842ad-3dae-49e1-8570-db9bcf4d0afc] Running
	I0910 18:59:34.200797   71529 system_pods.go:61] "kube-scheduler-no-preload-347802" [20e59c6b-4387-4dd0-b242-78d107775275] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 18:59:34.200804   71529 system_pods.go:61] "metrics-server-6867b74b74-w8rqv" [52535081-4503-4136-963d-6b2db6c0224e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 18:59:34.200809   71529 system_pods.go:61] "storage-provisioner" [9f7c0178-7194-4c73-95a4-5a3c0091f3ac] Running
	I0910 18:59:34.200816   71529 system_pods.go:74] duration metric: took 10.965409ms to wait for pod list to return data ...
	I0910 18:59:34.200857   71529 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:59:34.204544   71529 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:59:34.204568   71529 node_conditions.go:123] node cpu capacity is 2
	I0910 18:59:34.204580   71529 node_conditions.go:105] duration metric: took 3.714534ms to run NodePressure ...
	I0910 18:59:34.204597   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:34.487106   71529 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 18:59:34.491817   71529 kubeadm.go:739] kubelet initialised
	I0910 18:59:34.491838   71529 kubeadm.go:740] duration metric: took 4.708046ms waiting for restarted kubelet to initialise ...
	I0910 18:59:34.491845   71529 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:34.496604   71529 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:34.501535   71529 pod_ready.go:98] node "no-preload-347802" hosting pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.501553   71529 pod_ready.go:82] duration metric: took 4.927724ms for pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:34.501561   71529 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-347802" hosting pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.501567   71529 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:34.505473   71529 pod_ready.go:98] node "no-preload-347802" hosting pod "etcd-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.505491   71529 pod_ready.go:82] duration metric: took 3.917111ms for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:34.505499   71529 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-347802" hosting pod "etcd-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.505507   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:34.510025   71529 pod_ready.go:98] node "no-preload-347802" hosting pod "kube-apiserver-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.510043   71529 pod_ready.go:82] duration metric: took 4.522609ms for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:34.510050   71529 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-347802" hosting pod "kube-apiserver-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.510056   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:36.519023   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:32.597017   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:32.597589   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:32.597616   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:32.597554   72995 retry.go:31] will retry after 448.654363ms: waiting for machine to come up
	I0910 18:59:33.048100   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:33.048559   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:33.048590   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:33.048478   72995 retry.go:31] will retry after 654.829574ms: waiting for machine to come up
	I0910 18:59:33.704902   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:33.705446   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:33.705475   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:33.705363   72995 retry.go:31] will retry after 610.514078ms: waiting for machine to come up
	I0910 18:59:34.316978   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:34.317481   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:34.317503   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:34.317430   72995 retry.go:31] will retry after 1.125805817s: waiting for machine to come up
	I0910 18:59:35.444880   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:35.445369   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:35.445394   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:35.445312   72995 retry.go:31] will retry after 1.484426931s: waiting for machine to come up
	I0910 18:59:36.931028   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:36.931568   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:36.931613   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:36.931524   72995 retry.go:31] will retry after 1.819998768s: waiting for machine to come up
	I0910 18:59:36.403353   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 18:59:36.427345   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:36.452765   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:36.485795   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:36.512944   71627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:36.532454   71627 ssh_runner.go:195] Run: openssl version
	I0910 18:59:36.538449   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:36.550806   71627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:36.555761   71627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:36.555819   71627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:36.562430   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:36.573730   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:36.584987   71627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:36.589551   71627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:36.589615   71627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:36.595496   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:36.607821   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:36.620298   71627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:36.624888   71627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:36.624939   71627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:36.630534   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:59:36.641657   71627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:36.646317   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:36.652748   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:36.661166   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:36.670240   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:36.676776   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:36.686442   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 18:59:36.693233   71627 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-557504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-557504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:36.693351   71627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:36.693414   71627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:36.743159   71627 cri.go:89] found id: ""
	I0910 18:59:36.743256   71627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:36.754428   71627 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:36.754451   71627 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:36.754505   71627 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:36.765126   71627 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:36.766213   71627 kubeconfig.go:125] found "default-k8s-diff-port-557504" server: "https://192.168.72.54:8444"
	I0910 18:59:36.768428   71627 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:36.778678   71627 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I0910 18:59:36.778715   71627 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:36.778728   71627 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:36.778779   71627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:36.824031   71627 cri.go:89] found id: ""
	I0910 18:59:36.824107   71627 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:36.840585   71627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:36.851445   71627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:36.851462   71627 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:36.851508   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0910 18:59:36.860630   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:36.860682   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:36.869973   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0910 18:59:36.880034   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:36.880099   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:36.889684   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0910 18:59:36.898786   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:36.898870   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:36.908328   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0910 18:59:36.917272   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:36.917334   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:59:36.928923   71627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:59:36.940238   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:37.079143   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:37.945317   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:38.157807   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:38.245283   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:38.353653   71627 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:59:38.353746   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:38.854791   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:39.354743   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:39.409511   71627 api_server.go:72] duration metric: took 1.055855393s to wait for apiserver process to appear ...
	I0910 18:59:39.409543   71627 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:59:39.409566   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:39.410104   71627 api_server.go:269] stopped: https://192.168.72.54:8444/healthz: Get "https://192.168.72.54:8444/healthz": dial tcp 192.168.72.54:8444: connect: connection refused
	I0910 18:59:39.909665   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:39.018802   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:41.517911   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:38.753463   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:38.754076   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:38.754107   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:38.754019   72995 retry.go:31] will retry after 2.258214375s: waiting for machine to come up
	I0910 18:59:41.013524   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:41.013988   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:41.014011   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:41.013910   72995 retry.go:31] will retry after 2.030553777s: waiting for machine to come up
	I0910 18:59:41.976133   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:59:41.976166   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:59:41.976179   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:42.080631   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:42.080674   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:42.409865   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:42.421093   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:42.421174   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:42.910272   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:42.914729   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:42.914757   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:43.410280   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:43.414731   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I0910 18:59:43.421135   71627 api_server.go:141] control plane version: v1.31.0
	I0910 18:59:43.421163   71627 api_server.go:131] duration metric: took 4.011612782s to wait for apiserver health ...
	I0910 18:59:43.421172   71627 cni.go:84] Creating CNI manager for ""
	I0910 18:59:43.421178   71627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:43.423063   71627 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 18:59:43.424278   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 18:59:43.434823   71627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 18:59:43.461604   71627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:59:43.477566   71627 system_pods.go:59] 8 kube-system pods found
	I0910 18:59:43.477592   71627 system_pods.go:61] "coredns-6f6b679f8f-nq9fl" [87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 18:59:43.477600   71627 system_pods.go:61] "etcd-default-k8s-diff-port-557504" [484ce7e2-e5b6-4b96-928e-c4519e072425] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 18:59:43.477606   71627 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-557504" [55a71e48-2cca-484c-8186-176494f8158c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 18:59:43.477616   71627 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-557504" [98e7866a-c4a1-4b90-a758-506502405feb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 18:59:43.477623   71627 system_pods.go:61] "kube-proxy-4t8r9" [aca739fc-0169-433b-85f1-17bf3ab538cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0910 18:59:43.477631   71627 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-557504" [eaa24622-2f9c-47bd-b002-173d5fb06126] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 18:59:43.477638   71627 system_pods.go:61] "metrics-server-6867b74b74-4sfwg" [6b5d0161-6a62-4752-b714-ada6b3772956] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 18:59:43.477648   71627 system_pods.go:61] "storage-provisioner" [7536d42b-90f4-44de-a7ba-652f8e535304] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 18:59:43.477658   71627 system_pods.go:74] duration metric: took 16.035701ms to wait for pod list to return data ...
	I0910 18:59:43.477673   71627 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:59:43.485818   71627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:59:43.485840   71627 node_conditions.go:123] node cpu capacity is 2
	I0910 18:59:43.485850   71627 node_conditions.go:105] duration metric: took 8.173642ms to run NodePressure ...
	I0910 18:59:43.485864   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:43.752422   71627 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 18:59:43.756713   71627 kubeadm.go:739] kubelet initialised
	I0910 18:59:43.756735   71627 kubeadm.go:740] duration metric: took 4.285787ms waiting for restarted kubelet to initialise ...
	I0910 18:59:43.756744   71627 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:43.762384   71627 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.767080   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.767099   71627 pod_ready.go:82] duration metric: took 4.695864ms for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.767109   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.767116   71627 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.772560   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.772579   71627 pod_ready.go:82] duration metric: took 5.453737ms for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.772588   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.772593   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.776328   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.776345   71627 pod_ready.go:82] duration metric: took 3.745149ms for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.776352   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.776357   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.865825   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.865850   71627 pod_ready.go:82] duration metric: took 89.48636ms for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.865862   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.865868   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:44.264892   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-proxy-4t8r9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.264922   71627 pod_ready.go:82] duration metric: took 399.047611ms for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:44.264932   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-proxy-4t8r9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.264938   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:44.665376   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.665402   71627 pod_ready.go:82] duration metric: took 400.457184ms for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:44.665413   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.665418   71627 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:45.065696   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:45.065724   71627 pod_ready.go:82] duration metric: took 400.298527ms for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:45.065736   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:45.065743   71627 pod_ready.go:39] duration metric: took 1.308988307s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:45.065759   71627 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 18:59:45.077813   71627 ops.go:34] apiserver oom_adj: -16
	I0910 18:59:45.077838   71627 kubeadm.go:597] duration metric: took 8.323378955s to restartPrimaryControlPlane
	I0910 18:59:45.077846   71627 kubeadm.go:394] duration metric: took 8.384626167s to StartCluster
	I0910 18:59:45.077860   71627 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:45.077980   71627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:59:45.079979   71627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:45.080304   71627 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 18:59:45.080399   71627 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 18:59:45.080478   71627 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-557504"
	I0910 18:59:45.080510   71627 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-557504"
	I0910 18:59:45.080506   71627 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-557504"
	W0910 18:59:45.080523   71627 addons.go:243] addon storage-provisioner should already be in state true
	I0910 18:59:45.080519   71627 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-557504"
	I0910 18:59:45.080553   71627 host.go:66] Checking if "default-k8s-diff-port-557504" exists ...
	I0910 18:59:45.080568   71627 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-557504"
	I0910 18:59:45.080568   71627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-557504"
	W0910 18:59:45.080582   71627 addons.go:243] addon metrics-server should already be in state true
	I0910 18:59:45.080529   71627 config.go:182] Loaded profile config "default-k8s-diff-port-557504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:59:45.080608   71627 host.go:66] Checking if "default-k8s-diff-port-557504" exists ...
	I0910 18:59:45.080906   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.080932   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.080989   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.080994   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.081015   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.081015   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.081905   71627 out.go:177] * Verifying Kubernetes components...
	I0910 18:59:45.083206   71627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:45.096019   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43935
	I0910 18:59:45.096288   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38379
	I0910 18:59:45.096453   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.096730   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.096984   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.097012   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.097243   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.097273   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.097401   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.097596   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.097678   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0910 18:59:45.097693   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.098049   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.098464   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.098504   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.099185   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.099207   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.099592   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.100125   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.100166   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.101159   71627 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-557504"
	W0910 18:59:45.101175   71627 addons.go:243] addon default-storageclass should already be in state true
	I0910 18:59:45.101203   71627 host.go:66] Checking if "default-k8s-diff-port-557504" exists ...
	I0910 18:59:45.101501   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.101537   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.114823   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41241
	I0910 18:59:45.115253   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.115363   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38597
	I0910 18:59:45.115737   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.115759   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.115795   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.116106   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.116244   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.116270   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.116289   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.116696   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.117290   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.117327   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.117546   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36199
	I0910 18:59:45.117879   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.118496   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:45.118631   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.118643   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.118949   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.119107   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.120353   71627 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 18:59:45.120775   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:45.121685   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 18:59:45.121699   71627 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 18:59:45.121718   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:45.122500   71627 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:45.123762   71627 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:59:45.123778   71627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 18:59:45.123792   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:45.125345   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.125926   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:45.126161   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:45.126357   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:45.125943   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.126548   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:45.126661   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:45.127075   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.127507   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:45.127522   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.127675   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:45.127810   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:45.127905   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:45.127997   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:45.132978   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39839
	I0910 18:59:45.133303   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.133757   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.133779   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.134043   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.134188   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.135712   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:45.135917   71627 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 18:59:45.135928   71627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 18:59:45.135938   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:45.138375   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.138616   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:45.138629   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.138768   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:45.138937   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:45.139054   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:45.139181   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
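Each "new ssh client" entry above is an ordinary SSH session into the guest VM. A manual equivalent, using the key path, user and address taken from these lines (the two -o options shown are a subset of the ones libmachine itself passes for its external SSH client elsewhere in this log), would be:

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa \
	  docker@192.168.72.54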
	I0910 18:59:45.293036   71627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:45.311747   71627 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-557504" to be "Ready" ...
	I0910 18:59:45.425820   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 18:59:45.425852   71627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 18:59:45.430783   71627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:59:45.441452   71627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 18:59:45.481245   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 18:59:45.481268   71627 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 18:59:45.573348   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 18:59:45.573373   71627 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 18:59:45.634830   71627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
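The single kubectl apply above installs the four metrics-server manifests staged into /etc/kubernetes/addons. As a rough cross-check from the host (object names assumed from the standard metrics-server addon, not read back from this log), the result could be inspected with:

	kubectl --context default-k8s-diff-port-557504 -n kube-system get deployment metrics-server
	kubectl --context default-k8s-diff-port-557504 get apiservice v1beta1.metrics.k8s.io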
	I0910 18:59:46.589194   71627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.147713188s)
	I0910 18:59:46.589253   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589266   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589284   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589311   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589321   71627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.158508631s)
	I0910 18:59:46.589343   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589355   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589700   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.589700   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.589723   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589729   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589730   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.589736   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.589738   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589741   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.589751   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.589752   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589761   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589774   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589816   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589755   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589852   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589961   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589971   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.590192   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.590207   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.590220   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.591675   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.591692   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.591702   71627 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-557504"
	I0910 18:59:46.595906   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.595921   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.596105   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.596126   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.598033   71627 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
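A quick way to confirm the same set of enabled addons from the host (standard minikube CLI, profile name taken from this log) would be:

	minikube -p default-k8s-diff-port-557504 addons list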
	I0910 18:59:44.023282   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:46.516768   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:47.016400   71529 pod_ready.go:93] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:47.016423   71529 pod_ready.go:82] duration metric: took 12.506359172s for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:47.016435   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7v9n8" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:47.020809   71529 pod_ready.go:93] pod "kube-proxy-7v9n8" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:47.020827   71529 pod_ready.go:82] duration metric: took 4.386051ms for pod "kube-proxy-7v9n8" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:47.020836   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.046937   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:43.047363   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:43.047393   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:43.047314   72995 retry.go:31] will retry after 2.233047134s: waiting for machine to come up
	I0910 18:59:45.282610   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:45.283104   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:45.283133   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:45.283026   72995 retry.go:31] will retry after 4.238676711s: waiting for machine to come up
	I0910 18:59:51.182133   71183 start.go:364] duration metric: took 56.422548201s to acquireMachinesLock for "embed-certs-836868"
	I0910 18:59:51.182195   71183 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:51.182206   71183 fix.go:54] fixHost starting: 
	I0910 18:59:51.182600   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:51.182637   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:51.198943   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33067
	I0910 18:59:51.199345   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:51.199803   71183 main.go:141] libmachine: Using API Version  1
	I0910 18:59:51.199828   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:51.200153   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:51.200364   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 18:59:51.200493   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 18:59:51.202100   71183 fix.go:112] recreateIfNeeded on embed-certs-836868: state=Stopped err=<nil>
	I0910 18:59:51.202123   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	W0910 18:59:51.202286   71183 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:51.204028   71183 out.go:177] * Restarting existing kvm2 VM for "embed-certs-836868" ...
	I0910 18:59:46.599125   71627 addons.go:510] duration metric: took 1.518742666s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0910 18:59:47.316003   71627 node_ready.go:53] node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:49.316691   71627 node_ready.go:53] node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:49.027374   71529 pod_ready.go:93] pod "kube-scheduler-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:49.027393   71529 pod_ready.go:82] duration metric: took 2.006551523s for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:49.027403   71529 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:51.034568   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:51.205180   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Start
	I0910 18:59:51.205332   71183 main.go:141] libmachine: (embed-certs-836868) Ensuring networks are active...
	I0910 18:59:51.205952   71183 main.go:141] libmachine: (embed-certs-836868) Ensuring network default is active
	I0910 18:59:51.206322   71183 main.go:141] libmachine: (embed-certs-836868) Ensuring network mk-embed-certs-836868 is active
	I0910 18:59:51.206717   71183 main.go:141] libmachine: (embed-certs-836868) Getting domain xml...
	I0910 18:59:51.207430   71183 main.go:141] libmachine: (embed-certs-836868) Creating domain...
	I0910 18:59:49.526000   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.526536   72122 main.go:141] libmachine: (old-k8s-version-432422) Found IP for machine: 192.168.61.51
	I0910 18:59:49.526558   72122 main.go:141] libmachine: (old-k8s-version-432422) Reserving static IP address...
	I0910 18:59:49.526569   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has current primary IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.527018   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "old-k8s-version-432422", mac: "52:54:00:65:ab:4d", ip: "192.168.61.51"} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.527063   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | skip adding static IP to network mk-old-k8s-version-432422 - found existing host DHCP lease matching {name: "old-k8s-version-432422", mac: "52:54:00:65:ab:4d", ip: "192.168.61.51"}
	I0910 18:59:49.527084   72122 main.go:141] libmachine: (old-k8s-version-432422) Reserved static IP address: 192.168.61.51
	I0910 18:59:49.527099   72122 main.go:141] libmachine: (old-k8s-version-432422) Waiting for SSH to be available...
	I0910 18:59:49.527113   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Getting to WaitForSSH function...
	I0910 18:59:49.529544   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.529962   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.529987   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.530143   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Using SSH client type: external
	I0910 18:59:49.530170   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa (-rw-------)
	I0910 18:59:49.530195   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:49.530208   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | About to run SSH command:
	I0910 18:59:49.530245   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | exit 0
	I0910 18:59:49.656944   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | SSH cmd err, output: <nil>: 
	I0910 18:59:49.657307   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetConfigRaw
	I0910 18:59:49.657926   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:49.660332   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.660689   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.660711   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.660992   72122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/config.json ...
	I0910 18:59:49.661238   72122 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:49.661259   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:49.661480   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.663824   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.664208   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.664236   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.664370   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.664565   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.664712   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.664887   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.665103   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.665392   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.665406   72122 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:49.769433   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:49.769468   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:49.769716   72122 buildroot.go:166] provisioning hostname "old-k8s-version-432422"
	I0910 18:59:49.769740   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:49.769918   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.772324   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.772710   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.772736   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.772875   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.773061   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.773245   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.773384   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.773554   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.773751   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.773764   72122 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-432422 && echo "old-k8s-version-432422" | sudo tee /etc/hostname
	I0910 18:59:49.891230   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-432422
	
	I0910 18:59:49.891259   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.894272   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.894641   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.894683   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.894820   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.894983   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.895094   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.895210   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.895330   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.895540   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.895559   72122 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-432422' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-432422/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-432422' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:59:50.011767   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
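The shell snippet above only touches /etc/hosts when no existing line already names the new hostname: it rewrites a 127.0.1.1 entry in place if one exists, and appends one otherwise. Assuming a stock guest /etc/hosts (illustrative; the file itself is not captured in this log), the relevant lines afterwards would read:

	$ grep ^127 /etc/hosts
	127.0.0.1   localhost
	127.0.1.1 old-k8s-version-432422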
	I0910 18:59:50.011795   72122 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:50.011843   72122 buildroot.go:174] setting up certificates
	I0910 18:59:50.011854   72122 provision.go:84] configureAuth start
	I0910 18:59:50.011866   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:50.012185   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:50.014947   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.015352   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.015388   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.015549   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.017712   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.018002   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.018036   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.018193   72122 provision.go:143] copyHostCerts
	I0910 18:59:50.018251   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:50.018265   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:50.018337   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:50.018481   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:50.018491   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:50.018513   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:50.018585   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:50.018594   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:50.018612   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:50.018667   72122 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-432422 san=[127.0.0.1 192.168.61.51 localhost minikube old-k8s-version-432422]
	I0910 18:59:50.528798   72122 provision.go:177] copyRemoteCerts
	I0910 18:59:50.528864   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:50.528900   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.532154   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.532576   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.532613   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.532765   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.532995   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.533205   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.533370   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:50.620169   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0910 18:59:50.647163   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:50.679214   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:50.704333   72122 provision.go:87] duration metric: took 692.46607ms to configureAuth
	I0910 18:59:50.704360   72122 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:50.704545   72122 config.go:182] Loaded profile config "old-k8s-version-432422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0910 18:59:50.704639   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.707529   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.707903   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.707931   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.708082   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.708297   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.708463   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.708641   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.708786   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:50.708954   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:50.708969   72122 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:50.935375   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:50.935403   72122 machine.go:96] duration metric: took 1.274152353s to provisionDockerMachine
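The tee command a few lines above drops a one-line override into /etc/sysconfig/crio.minikube before restarting CRI-O; its expected content is exactly what the command echoed back in this log:

	$ cat /etc/sysconfig/crio.minikube
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '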
	I0910 18:59:50.935414   72122 start.go:293] postStartSetup for "old-k8s-version-432422" (driver="kvm2")
	I0910 18:59:50.935424   72122 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:50.935448   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:50.935763   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:50.935796   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.938507   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.938865   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.938902   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.939008   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.939198   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.939529   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.939689   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.024726   72122 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:51.029522   72122 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:51.029547   72122 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:51.029632   72122 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:51.029734   72122 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:51.029848   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:51.042454   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:51.068748   72122 start.go:296] duration metric: took 133.318275ms for postStartSetup
	I0910 18:59:51.068792   72122 fix.go:56] duration metric: took 20.866428313s for fixHost
	I0910 18:59:51.068816   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.071533   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.071894   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.071921   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.072072   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.072264   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.072468   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.072616   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.072784   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:51.072938   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:51.072948   72122 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:51.181996   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994791.151610055
	
	I0910 18:59:51.182016   72122 fix.go:216] guest clock: 1725994791.151610055
	I0910 18:59:51.182024   72122 fix.go:229] Guest: 2024-09-10 18:59:51.151610055 +0000 UTC Remote: 2024-09-10 18:59:51.068796263 +0000 UTC m=+228.614166738 (delta=82.813792ms)
	I0910 18:59:51.182048   72122 fix.go:200] guest clock delta is within tolerance: 82.813792ms
	I0910 18:59:51.182055   72122 start.go:83] releasing machines lock for "old-k8s-version-432422", held for 20.979733564s
	I0910 18:59:51.182094   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.182331   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:51.184857   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.185183   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.185212   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.185346   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.185840   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.186006   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.186079   72122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:51.186143   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.186215   72122 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:51.186238   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.189304   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189674   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.189698   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189765   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189879   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.190057   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.190212   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.190230   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.190255   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.190358   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.190470   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.190652   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.190817   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.190948   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.296968   72122 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:51.303144   72122 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:51.447027   72122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:51.454963   72122 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:51.455032   72122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:51.474857   72122 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:51.474882   72122 start.go:495] detecting cgroup driver to use...
	I0910 18:59:51.474957   72122 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:51.490457   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:51.504502   72122 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:51.504569   72122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:51.523331   72122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:51.543438   72122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:51.678734   72122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:51.831736   72122 docker.go:233] disabling docker service ...
	I0910 18:59:51.831804   72122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:51.846805   72122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:51.865771   72122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:52.012922   72122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:52.161595   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:52.180034   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:52.200984   72122 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0910 18:59:52.201041   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.211927   72122 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:52.211989   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.223601   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.234211   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
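Taken together, the sed edits above set the pause image and cgroup handling in /etc/crio/crio.conf.d/02-crio.conf. The expected key/value lines afterwards, reconstructed from the commands rather than read back from the file, are:

	$ grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	pause_image = "registry.k8s.io/pause:3.2"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"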
	I0910 18:59:52.246209   72122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:52.264079   72122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:52.277144   72122 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:52.277204   72122 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:52.292683   72122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:52.304601   72122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:52.421971   72122 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:59:52.544386   72122 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:52.544459   72122 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:52.551436   72122 start.go:563] Will wait 60s for crictl version
	I0910 18:59:52.551487   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:52.555614   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:52.598031   72122 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:52.598128   72122 ssh_runner.go:195] Run: crio --version
	I0910 18:59:52.629578   72122 ssh_runner.go:195] Run: crio --version
	I0910 18:59:52.662403   72122 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0910 18:59:51.815436   71627 node_ready.go:53] node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:52.816775   71627 node_ready.go:49] node "default-k8s-diff-port-557504" has status "Ready":"True"
	I0910 18:59:52.816809   71627 node_ready.go:38] duration metric: took 7.505015999s for node "default-k8s-diff-port-557504" to be "Ready" ...
	I0910 18:59:52.816821   71627 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:52.823528   71627 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.829667   71627 pod_ready.go:93] pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:52.829688   71627 pod_ready.go:82] duration metric: took 6.135159ms for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.829696   71627 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.833912   71627 pod_ready.go:93] pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:52.833933   71627 pod_ready.go:82] duration metric: took 4.231672ms for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.833942   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.838863   71627 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:52.838883   71627 pod_ready.go:82] duration metric: took 4.934379ms for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.838897   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:53.851413   71627 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:53.851437   71627 pod_ready.go:82] duration metric: took 1.012531075s for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:53.851447   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:54.020886   71627 pod_ready.go:93] pod "kube-proxy-4t8r9" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:54.020910   71627 pod_ready.go:82] duration metric: took 169.456474ms for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:54.020926   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:55.217416   71627 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:55.217440   71627 pod_ready.go:82] duration metric: took 1.196506075s for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:55.217451   71627 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
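The pod_ready lines above are minikube waiting for each system-critical pod to report the Ready condition before moving on. The same check can be reproduced by hand against the cluster; a minimal sketch in Go, shelling out to `kubectl wait` (the pod name is taken from the log above, and the helper itself is illustrative, not minikube's own code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // waitReady blocks until the named pod reports the Ready condition,
    // mirroring the "waiting up to 6m0s for pod ... to be Ready" checks above.
    func waitReady(context, namespace, pod string) error {
    	cmd := exec.Command("kubectl", "--context", context, "-n", namespace,
    		"wait", "--for=condition=Ready", "pod/"+pod, "--timeout=6m")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	return err
    }

    func main() {
    	if err := waitReady("default-k8s-diff-port-557504", "kube-system", "coredns-6f6b679f8f-nq9fl"); err != nil {
    		fmt.Println("pod never became Ready:", err)
    	}
    }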
	I0910 18:59:53.036769   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:55.536544   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:52.544041   71183 main.go:141] libmachine: (embed-certs-836868) Waiting to get IP...
	I0910 18:59:52.545001   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:52.545522   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:52.545586   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:52.545494   73202 retry.go:31] will retry after 260.451431ms: waiting for machine to come up
	I0910 18:59:52.807914   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:52.808351   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:52.808377   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:52.808307   73202 retry.go:31] will retry after 340.526757ms: waiting for machine to come up
	I0910 18:59:53.150854   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:53.151446   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:53.151476   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:53.151404   73202 retry.go:31] will retry after 470.620322ms: waiting for machine to come up
	I0910 18:59:53.624169   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:53.624709   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:53.624747   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:53.624657   73202 retry.go:31] will retry after 529.186273ms: waiting for machine to come up
	I0910 18:59:54.155156   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:54.155644   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:54.155673   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:54.155599   73202 retry.go:31] will retry after 575.877001ms: waiting for machine to come up
	I0910 18:59:54.733522   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:54.734049   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:54.734092   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:54.734000   73202 retry.go:31] will retry after 577.385946ms: waiting for machine to come up
	I0910 18:59:55.312705   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:55.313087   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:55.313114   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:55.313059   73202 retry.go:31] will retry after 735.788809ms: waiting for machine to come up
	I0910 18:59:56.049771   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:56.050272   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:56.050306   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:56.050224   73202 retry.go:31] will retry after 1.433431053s: waiting for machine to come up
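The "will retry after ...: waiting for machine to come up" lines above come from minikube polling libvirt until the freshly started embed-certs VM shows up with a DHCP lease, with the wait growing on each attempt. A minimal sketch of that retry-with-growing-backoff pattern (the lookupIP helper and the growth factor are assumptions for illustration, not minikube's exact implementation):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // lookupIP stands in for "ask libvirt for the domain's current DHCP lease";
    // here it simply fails for the first few attempts.
    func lookupIP(attempt int) (string, error) {
    	if attempt < 5 {
    		return "", errors.New("unable to find current IP address of domain")
    	}
    	return "192.168.39.107", nil
    }

    func main() {
    	wait := 200 * time.Millisecond
    	deadline := time.Now().Add(2 * time.Minute)
    	for attempt := 0; time.Now().Before(deadline); attempt++ {
    		ip, err := lookupIP(attempt)
    		if err == nil {
    			fmt.Println("machine is up at", ip)
    			return
    		}
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		wait += wait / 2 // grow the interval between attempts, roughly like the log above
    	}
    	fmt.Println("timed out waiting for machine to come up")
    }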
	I0910 18:59:52.663465   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:52.666401   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:52.666796   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:52.666843   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:52.667002   72122 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:52.672338   72122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:52.688427   72122 kubeadm.go:883] updating cluster {Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:52.688559   72122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 18:59:52.688623   72122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:52.740370   72122 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0910 18:59:52.740447   72122 ssh_runner.go:195] Run: which lz4
	I0910 18:59:52.744925   72122 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 18:59:52.749840   72122 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 18:59:52.749872   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0910 18:59:54.437031   72122 crio.go:462] duration metric: took 1.692132914s to copy over tarball
	I0910 18:59:54.437124   72122 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 18:59:57.462705   72122 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.025545297s)
	I0910 18:59:57.462743   72122 crio.go:469] duration metric: took 3.025690485s to extract the tarball
	I0910 18:59:57.462753   72122 ssh_runner.go:146] rm: /preloaded.tar.lz4
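The block above is the preload path: minikube asks CRI-O (via `sudo crictl images --output json`) whether the images for v1.20.0 are already present, finds kube-apiserver:v1.20.0 missing, copies the preloaded tarball over SSH, and unpacks it into /var with tar and lz4. A minimal sketch of that "is the image already there" check, assuming crictl's JSON output has the shape {"images":[{"repoTags":["..."]}]}:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // imageList captures only the field we need from `crictl images --output json`.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage reports whether the container runtime already has the given tag.
    func hasImage(tag string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		return false, err
    	}
    	for _, img := range list.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.20.0")
    	fmt.Println(ok, err) // false is what triggers the tarball copy above
    }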
	I0910 18:59:57.223959   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:59.224657   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:01.224783   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:58.035610   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:00.535779   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:57.485417   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:57.485870   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:57.485896   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:57.485815   73202 retry.go:31] will retry after 1.638565814s: waiting for machine to come up
	I0910 18:59:59.126134   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:59.126625   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:59.126657   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:59.126576   73202 retry.go:31] will retry after 2.127929201s: waiting for machine to come up
	I0910 19:00:01.256121   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:01.256665   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 19:00:01.256694   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 19:00:01.256612   73202 retry.go:31] will retry after 2.530100505s: waiting for machine to come up
	I0910 18:59:57.508817   72122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:57.551327   72122 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0910 18:59:57.551350   72122 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 18:59:57.551434   72122 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:57.551704   72122 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.551776   72122 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.552000   72122 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.551807   72122 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.551846   72122 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.551714   72122 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0910 18:59:57.551917   72122 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.553642   72122 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.553660   72122 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.553917   72122 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:57.553935   72122 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0910 18:59:57.554014   72122 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.554160   72122 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.554376   72122 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.554662   72122 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.726191   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.742799   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.745264   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.753214   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.768122   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.770828   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0910 18:59:57.774835   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.807657   72122 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0910 18:59:57.807693   72122 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.807733   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.908662   72122 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0910 18:59:57.908678   72122 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0910 18:59:57.908707   72122 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.908711   72122 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.908759   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.908760   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.920214   72122 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0910 18:59:57.920248   72122 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0910 18:59:57.920258   72122 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.920280   72122 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.920304   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.920313   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.937914   72122 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0910 18:59:57.937952   72122 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.937958   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.937999   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.938033   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.938006   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.938073   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.938063   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.938157   72122 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0910 18:59:57.938185   72122 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0910 18:59:57.938215   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:58.044082   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:58.044139   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:58.044146   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:58.044173   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.045813   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:58.045816   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:58.045849   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.198804   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:58.198841   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:58.198881   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:58.198944   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.198978   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:58.199000   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:58.199081   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.353153   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0910 18:59:58.353217   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0910 18:59:58.353232   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0910 18:59:58.353277   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0910 18:59:58.359353   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.359363   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0910 18:59:58.359421   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.386872   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:58.407734   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0910 18:59:58.425479   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0910 18:59:58.553340   72122 cache_images.go:92] duration metric: took 1.001972084s to LoadCachedImages
	W0910 18:59:58.553438   72122 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0910 18:59:58.553455   72122 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.20.0 crio true true} ...
	I0910 18:59:58.553634   72122 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-432422 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:59:58.553722   72122 ssh_runner.go:195] Run: crio config
	I0910 18:59:58.605518   72122 cni.go:84] Creating CNI manager for ""
	I0910 18:59:58.605542   72122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:58.605554   72122 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:58.605577   72122 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-432422 NodeName:old-k8s-version-432422 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0910 18:59:58.605744   72122 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-432422"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:59:58.605814   72122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0910 18:59:58.618033   72122 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:58.618096   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:58.629175   72122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0910 18:59:58.653830   72122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:58.679797   72122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0910 18:59:58.698692   72122 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:58.702565   72122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:58.715128   72122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:58.858262   72122 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:58.876681   72122 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422 for IP: 192.168.61.51
	I0910 18:59:58.876719   72122 certs.go:194] generating shared ca certs ...
	I0910 18:59:58.876740   72122 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:58.876921   72122 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:58.876983   72122 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:58.876996   72122 certs.go:256] generating profile certs ...
	I0910 18:59:58.877129   72122 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/client.key
	I0910 18:59:58.877210   72122 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key.da6b542b
	I0910 18:59:58.877264   72122 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.key
	I0910 18:59:58.877424   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:58.877473   72122 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:58.877491   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:58.877528   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:58.877560   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:58.877591   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:58.877648   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:58.878410   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:58.936013   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:58.969736   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:59.017414   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:59.063599   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0910 18:59:59.093934   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:59:59.138026   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:59.166507   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 18:59:59.196972   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:59.223596   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:59.250627   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:59.279886   72122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:59.300491   72122 ssh_runner.go:195] Run: openssl version
	I0910 18:59:59.306521   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:59.317238   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.321625   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.321682   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.327532   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:59.339028   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:59.350578   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.355025   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.355106   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.360701   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:59.375040   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:59.389867   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.395829   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.395890   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.402425   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:59:59.414077   72122 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:59.418909   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:59.425061   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:59.431213   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:59.437581   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:59.443603   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:59.449820   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
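Two openssl idioms appear in the lines above: `openssl x509 -hash -noout -in <cert>` prints the subject hash that names the /etc/ssl/certs/<hash>.0 symlinks being created (b5213941.0 for minikubeCA.pem, for example), and `openssl x509 -checkend 86400` exits 0 only if the certificate stays valid for at least the next 86400 seconds (24 hours). A minimal sketch of the expiry check:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // certValidFor reports whether the certificate at path is still valid for at
    // least `seconds` more seconds; `openssl x509 -checkend N` exits non-zero if
    // the certificate would expire within that window.
    func certValidFor(path string, seconds int) bool {
    	return exec.Command("openssl", "x509", "-noout", "-in", path,
    		"-checkend", fmt.Sprint(seconds)).Run() == nil
    }

    func main() {
    	// Path taken from the log above; 86400s matches minikube's 24h check.
    	fmt.Println(certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400))
    }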
	I0910 18:59:59.456100   72122 kubeadm.go:392] StartCluster: {Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:59.456189   72122 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:59.456234   72122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:59.497167   72122 cri.go:89] found id: ""
	I0910 18:59:59.497227   72122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:59.508449   72122 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:59.508474   72122 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:59.508527   72122 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:59.521416   72122 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:59.522489   72122 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-432422" does not appear in /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:59:59.523125   72122 kubeconfig.go:62] /home/jenkins/minikube-integration/19598-5973/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-432422" cluster setting kubeconfig missing "old-k8s-version-432422" context setting]
	I0910 18:59:59.524107   72122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:59.637793   72122 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:59.651879   72122 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.51
	I0910 18:59:59.651916   72122 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:59.651930   72122 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:59.651989   72122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:59.691857   72122 cri.go:89] found id: ""
	I0910 18:59:59.691922   72122 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:59.708610   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:59.718680   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:59.718702   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:59.718755   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:59:59.729965   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:59.730028   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:59.740037   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:59:59.750640   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:59.750706   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:59.762436   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:59:59.773456   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:59.773522   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:59.783438   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:59:59.792996   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:59.793056   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:59:59.805000   72122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:59:59.815384   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:59.955068   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:00.842403   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.102530   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.212897   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.340128   72122 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:00:01.340217   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:01.841004   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:02.340913   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
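After the kubeadm init phases, minikube polls for the kube-apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms (visible in the timestamps above) until it appears. A minimal sketch of that wait loop; the 4-minute deadline here is an assumption for illustration only:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning mirrors the log's check: pgrep exits 0 when a process
    // matching the pattern exists.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			fmt.Println("apiserver process appeared")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // poll interval matching the log above
    	}
    	fmt.Println("timed out waiting for apiserver process")
    }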
	I0910 19:00:03.225898   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:05.723882   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:03.034295   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:05.034431   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:03.790275   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:03.790710   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 19:00:03.790736   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 19:00:03.790662   73202 retry.go:31] will retry after 3.202952028s: waiting for machine to come up
	I0910 19:00:06.995302   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:06.996124   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 19:00:06.996149   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 19:00:06.996073   73202 retry.go:31] will retry after 3.076425277s: waiting for machine to come up
	I0910 19:00:02.840935   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:03.340938   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:03.840669   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:04.341213   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:04.841274   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:05.340698   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:05.841152   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:06.340425   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:06.841001   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:07.341198   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:07.724121   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:10.223744   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:07.533428   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:09.534830   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:12.033655   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:10.075125   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.075606   71183 main.go:141] libmachine: (embed-certs-836868) Found IP for machine: 192.168.39.107
	I0910 19:00:10.075634   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has current primary IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.075643   71183 main.go:141] libmachine: (embed-certs-836868) Reserving static IP address...
	I0910 19:00:10.076046   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "embed-certs-836868", mac: "52:54:00:24:ef:f2", ip: "192.168.39.107"} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.076075   71183 main.go:141] libmachine: (embed-certs-836868) DBG | skip adding static IP to network mk-embed-certs-836868 - found existing host DHCP lease matching {name: "embed-certs-836868", mac: "52:54:00:24:ef:f2", ip: "192.168.39.107"}
	I0910 19:00:10.076103   71183 main.go:141] libmachine: (embed-certs-836868) Reserved static IP address: 192.168.39.107
	I0910 19:00:10.076122   71183 main.go:141] libmachine: (embed-certs-836868) Waiting for SSH to be available...
	I0910 19:00:10.076133   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Getting to WaitForSSH function...
	I0910 19:00:10.078039   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.078327   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.078352   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.078452   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Using SSH client type: external
	I0910 19:00:10.078475   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa (-rw-------)
	I0910 19:00:10.078514   71183 main.go:141] libmachine: (embed-certs-836868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 19:00:10.078527   71183 main.go:141] libmachine: (embed-certs-836868) DBG | About to run SSH command:
	I0910 19:00:10.078548   71183 main.go:141] libmachine: (embed-certs-836868) DBG | exit 0
	I0910 19:00:10.201403   71183 main.go:141] libmachine: (embed-certs-836868) DBG | SSH cmd err, output: <nil>: 
	I0910 19:00:10.201748   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetConfigRaw
	I0910 19:00:10.202405   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:10.204760   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.205130   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.205160   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.205408   71183 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/config.json ...
	I0910 19:00:10.205697   71183 machine.go:93] provisionDockerMachine start ...
	I0910 19:00:10.205714   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:10.205924   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.208095   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.208394   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.208418   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.208534   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.208712   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.208856   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.208958   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.209193   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.209412   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.209427   71183 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 19:00:10.313247   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 19:00:10.313278   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 19:00:10.313556   71183 buildroot.go:166] provisioning hostname "embed-certs-836868"
	I0910 19:00:10.313584   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 19:00:10.313765   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.316135   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.316569   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.316592   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.316739   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.316893   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.317046   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.317165   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.317288   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.317490   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.317506   71183 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-836868 && echo "embed-certs-836868" | sudo tee /etc/hostname
	I0910 19:00:10.433585   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-836868
	
	I0910 19:00:10.433608   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.436076   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.436407   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.436440   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.436627   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.436826   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.436972   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.437146   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.437314   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.437480   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.437495   71183 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-836868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-836868/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-836868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 19:00:10.546105   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 19:00:10.546146   71183 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 19:00:10.546186   71183 buildroot.go:174] setting up certificates
	I0910 19:00:10.546197   71183 provision.go:84] configureAuth start
	I0910 19:00:10.546214   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 19:00:10.546485   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:10.549236   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.549567   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.549594   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.549696   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.551807   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.552162   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.552195   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.552326   71183 provision.go:143] copyHostCerts
	I0910 19:00:10.552370   71183 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 19:00:10.552380   71183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 19:00:10.552435   71183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 19:00:10.552559   71183 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 19:00:10.552568   71183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 19:00:10.552588   71183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 19:00:10.552646   71183 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 19:00:10.552653   71183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 19:00:10.552669   71183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 19:00:10.552714   71183 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.embed-certs-836868 san=[127.0.0.1 192.168.39.107 embed-certs-836868 localhost minikube]
	I0910 19:00:10.610073   71183 provision.go:177] copyRemoteCerts
	I0910 19:00:10.610132   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 19:00:10.610153   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.612881   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.613264   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.613301   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.613485   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.613695   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.613863   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.613980   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:10.695479   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 19:00:10.719380   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0910 19:00:10.744099   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 19:00:10.767849   71183 provision.go:87] duration metric: took 221.638443ms to configureAuth
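configureAuth above regenerates the machine's server certificate with the SANs listed in the "generating server cert" line (127.0.0.1, 192.168.39.107, embed-certs-836868, localhost, minikube) and then copies it to /etc/docker on the guest. A short Go sketch of issuing a certificate with those SANs; it self-signs for brevity and picks an arbitrary 3-year validity, whereas the real flow signs with ca.pem/ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Key size and SANs mirror the "generating server cert ... san=[...]" line above.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-836868"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0), // arbitrary validity for the sketch
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"embed-certs-836868", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.107")},
    	}
    	// Self-signed here to stay short; the provisioner signs with its CA key instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }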
	I0910 19:00:10.767873   71183 buildroot.go:189] setting minikube options for container-runtime
	I0910 19:00:10.768065   71183 config.go:182] Loaded profile config "embed-certs-836868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:00:10.768150   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.770831   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.771149   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.771178   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.771338   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.771539   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.771702   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.771825   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.771952   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.772106   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.772120   71183 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 19:00:10.992528   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 19:00:10.992568   71183 machine.go:96] duration metric: took 786.857321ms to provisionDockerMachine
	I0910 19:00:10.992583   71183 start.go:293] postStartSetup for "embed-certs-836868" (driver="kvm2")
	I0910 19:00:10.992598   71183 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 19:00:10.992630   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:10.992999   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 19:00:10.993030   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.995361   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.995745   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.995777   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.995925   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.996100   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.996212   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.996375   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:11.079205   71183 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 19:00:11.083998   71183 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 19:00:11.084028   71183 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 19:00:11.084089   71183 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 19:00:11.084158   71183 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 19:00:11.084241   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 19:00:11.093150   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 19:00:11.116894   71183 start.go:296] duration metric: took 124.294668ms for postStartSetup
	I0910 19:00:11.116938   71183 fix.go:56] duration metric: took 19.934731446s for fixHost
	I0910 19:00:11.116962   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:11.119482   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.119784   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.119821   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.119980   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:11.120176   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.120331   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.120501   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:11.120645   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:11.120868   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:11.120883   71183 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 19:00:11.217542   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994811.172877822
	
	I0910 19:00:11.217570   71183 fix.go:216] guest clock: 1725994811.172877822
	I0910 19:00:11.217577   71183 fix.go:229] Guest: 2024-09-10 19:00:11.172877822 +0000 UTC Remote: 2024-09-10 19:00:11.116943488 +0000 UTC m=+358.948412200 (delta=55.934334ms)
	I0910 19:00:11.217603   71183 fix.go:200] guest clock delta is within tolerance: 55.934334ms
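The fix.go lines above compare the guest's "date +%s.%N" output against the host-side timestamp and accept the 55.934334ms skew as within tolerance. A small Go sketch of that delta computation, using the two timestamps from the log (the 1-second tolerance is an assumed illustration value, not minikube's actual constant):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func main() {
    	// Guest output of `date +%s.%N` and the host-side timestamp, both taken from the log above.
    	guestRaw := "1725994811.172877822"
    	secs, err := strconv.ParseFloat(guestRaw, 64) // float64 rounds the nanoseconds; fine for a skew check
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*1e9)).UTC()
    	host := time.Date(2024, 9, 10, 19, 0, 11, 116943488, time.UTC)
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	fmt.Printf("guest clock delta %v, within assumed 1s tolerance: %v\n", delta, delta < time.Second)
    }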
	I0910 19:00:11.217607   71183 start.go:83] releasing machines lock for "embed-certs-836868", held for 20.035440196s
	I0910 19:00:11.217627   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.217861   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:11.220855   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.221282   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.221313   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.221533   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.222074   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.222277   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.222354   71183 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 19:00:11.222402   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:11.222528   71183 ssh_runner.go:195] Run: cat /version.json
	I0910 19:00:11.222570   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:11.225205   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.225542   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.225565   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.225581   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.225753   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:11.225934   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.226035   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.226062   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.226109   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:11.226207   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:11.226283   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:11.226370   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.226535   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:11.226668   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:11.297642   71183 ssh_runner.go:195] Run: systemctl --version
	I0910 19:00:11.322486   71183 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 19:00:11.470402   71183 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 19:00:11.477843   71183 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 19:00:11.477903   71183 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 19:00:11.495518   71183 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 19:00:11.495542   71183 start.go:495] detecting cgroup driver to use...
	I0910 19:00:11.495597   71183 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 19:00:11.512467   71183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:00:11.526665   71183 docker.go:217] disabling cri-docker service (if available) ...
	I0910 19:00:11.526732   71183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 19:00:11.540445   71183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 19:00:11.554386   71183 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 19:00:11.682012   71183 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 19:00:11.846239   71183 docker.go:233] disabling docker service ...
	I0910 19:00:11.846303   71183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 19:00:11.860981   71183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 19:00:11.874271   71183 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 19:00:12.005716   71183 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 19:00:12.137151   71183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 19:00:12.151156   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:00:12.170086   71183 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 19:00:12.170150   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.180741   71183 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 19:00:12.180804   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.190933   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.200885   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:07.840772   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:08.341153   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:08.840737   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:09.340471   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:09.840262   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:10.340827   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:10.840645   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:11.340524   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:11.840521   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:12.340560   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:12.210950   71183 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 19:00:12.221730   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.232931   71183 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.251318   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.261473   71183 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 19:00:12.270818   71183 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 19:00:12.270873   71183 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 19:00:12.284581   71183 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 19:00:12.294214   71183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:00:12.424646   71183 ssh_runner.go:195] Run: sudo systemctl restart crio
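The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl), loads br_netfilter, enables ip_forward, and restarts cri-o. A hedged Go sketch of the two central config edits, equivalent to the sed one-liners in the log but not minikube's actual implementation:

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	path := "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path) // needs root on a real node
    	if err != nil {
    		panic(err)
    	}
    	// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
    	// Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		panic(err)
    	}
    }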
	I0910 19:00:12.517553   71183 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 19:00:12.517633   71183 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 19:00:12.522728   71183 start.go:563] Will wait 60s for crictl version
	I0910 19:00:12.522775   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:00:12.526754   71183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 19:00:12.569377   71183 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 19:00:12.569454   71183 ssh_runner.go:195] Run: crio --version
	I0910 19:00:12.597783   71183 ssh_runner.go:195] Run: crio --version
	I0910 19:00:12.632619   71183 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 19:00:12.725298   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:15.223906   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:14.035868   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:16.534058   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:12.633800   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:12.637104   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:12.637447   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:12.637476   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:12.637684   71183 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 19:00:12.641996   71183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:00:12.654577   71183 kubeadm.go:883] updating cluster {Name:embed-certs-836868 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-836868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 19:00:12.654684   71183 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 19:00:12.654737   71183 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 19:00:12.694585   71183 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 19:00:12.694644   71183 ssh_runner.go:195] Run: which lz4
	I0910 19:00:12.699764   71183 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 19:00:12.705406   71183 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 19:00:12.705437   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 19:00:14.054131   71183 crio.go:462] duration metric: took 1.354391682s to copy over tarball
	I0910 19:00:14.054206   71183 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 19:00:16.114941   71183 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.06070257s)
	I0910 19:00:16.114968   71183 crio.go:469] duration metric: took 2.060808083s to extract the tarball
	I0910 19:00:16.114978   71183 ssh_runner.go:146] rm: /preloaded.tar.lz4
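Because no v1.31.0 images were found in the runtime, the provisioner copies the ~389 MB preloaded image tarball to the guest and unpacks it into /var so cri-o starts with the images already cached, then deletes the tarball. A minimal Go sketch of the extraction step, mirroring the tar invocation in the log (illustrative only):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const tarball = "/preloaded.tar.lz4"
    	if _, err := os.Stat(tarball); err != nil {
    		fmt.Println("no preload tarball:", err)
    		return
    	}
    	// Same command as the ssh_runner.go:195 line above.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Println("extract failed:", err)
    	}
    }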
	I0910 19:00:16.153934   71183 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 19:00:16.199988   71183 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 19:00:16.200008   71183 cache_images.go:84] Images are preloaded, skipping loading
	I0910 19:00:16.200015   71183 kubeadm.go:934] updating node { 192.168.39.107 8443 v1.31.0 crio true true} ...
	I0910 19:00:16.200109   71183 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-836868 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-836868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 19:00:16.200168   71183 ssh_runner.go:195] Run: crio config
	I0910 19:00:16.249409   71183 cni.go:84] Creating CNI manager for ""
	I0910 19:00:16.249430   71183 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:00:16.249443   71183 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 19:00:16.249462   71183 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-836868 NodeName:embed-certs-836868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 19:00:16.249596   71183 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-836868"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 19:00:16.249652   71183 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 19:00:16.265984   71183 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 19:00:16.266062   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 19:00:16.276007   71183 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0910 19:00:16.291971   71183 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 19:00:16.307712   71183 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0910 19:00:16.323789   71183 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0910 19:00:16.327478   71183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:00:16.339545   71183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:00:16.470249   71183 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:00:16.487798   71183 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868 for IP: 192.168.39.107
	I0910 19:00:16.487838   71183 certs.go:194] generating shared ca certs ...
	I0910 19:00:16.487858   71183 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:00:16.488058   71183 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 19:00:16.488110   71183 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 19:00:16.488124   71183 certs.go:256] generating profile certs ...
	I0910 19:00:16.488243   71183 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/client.key
	I0910 19:00:16.488307   71183 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/apiserver.key.04acd22a
	I0910 19:00:16.488355   71183 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/proxy-client.key
	I0910 19:00:16.488507   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 19:00:16.488547   71183 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 19:00:16.488560   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 19:00:16.488593   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 19:00:16.488633   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 19:00:16.488669   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 19:00:16.488856   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 19:00:16.489528   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 19:00:16.529980   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 19:00:16.568653   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 19:00:16.593924   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 19:00:16.628058   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0910 19:00:16.669209   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 19:00:16.693274   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 19:00:16.716323   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 19:00:16.740155   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 19:00:16.763908   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 19:00:16.787980   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 19:00:16.811754   71183 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 19:00:16.828151   71183 ssh_runner.go:195] Run: openssl version
	I0910 19:00:16.834095   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 19:00:16.845376   71183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:00:16.850178   71183 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:00:16.850230   71183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:00:16.856507   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 19:00:16.868105   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 19:00:16.879950   71183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 19:00:16.884778   71183 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 19:00:16.884823   71183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 19:00:16.890715   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 19:00:16.903523   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 19:00:16.914585   71183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 19:00:16.919105   71183 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 19:00:16.919151   71183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 19:00:16.924965   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 19:00:16.935579   71183 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 19:00:16.939895   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 19:00:16.945595   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 19:00:16.951247   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 19:00:16.956938   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 19:00:16.962908   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 19:00:16.968664   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
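The stat and openssl runs above verify that each control-plane certificate exists and, via "-checkend 86400", that it will still be valid 24 hours from now. An equivalent check written in Go (a sketch only; the path is one of the certs named in the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor24h reports whether the certificate at path is still valid 24h from now,
    // matching the semantics of `openssl x509 -checkend 86400`.
    func validFor24h(path string) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	fmt.Println(ok, err)
    }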
	I0910 19:00:16.974624   71183 kubeadm.go:392] StartCluster: {Name:embed-certs-836868 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-836868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:00:16.974725   71183 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 19:00:16.974778   71183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 19:00:17.012869   71183 cri.go:89] found id: ""
	I0910 19:00:17.012947   71183 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 19:00:17.023781   71183 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 19:00:17.023798   71183 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 19:00:17.023846   71183 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 19:00:17.034549   71183 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 19:00:17.035566   71183 kubeconfig.go:125] found "embed-certs-836868" server: "https://192.168.39.107:8443"
	I0910 19:00:17.037751   71183 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 19:00:17.047667   71183 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.107
	I0910 19:00:17.047696   71183 kubeadm.go:1160] stopping kube-system containers ...
	I0910 19:00:17.047708   71183 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 19:00:17.047747   71183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 19:00:17.083130   71183 cri.go:89] found id: ""
	I0910 19:00:17.083200   71183 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 19:00:17.101035   71183 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:00:17.111335   71183 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:00:17.111357   71183 kubeadm.go:157] found existing configuration files:
	
	I0910 19:00:17.111414   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:00:17.120543   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:00:17.120593   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:00:17.130938   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:00:17.140688   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:00:17.140747   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:00:17.150637   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:00:17.160483   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:00:17.160520   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:00:17.170417   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:00:17.179778   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:00:17.179827   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:00:17.189197   71183 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:00:17.199264   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
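The block above is the stale-config cleanup that runs before the control plane is re-initialised: each /etc/kubernetes/*.conf is grepped for the expected endpoint https://control-plane.minikube.internal:8443, and any file that is missing or points elsewhere is removed so the kubeadm phases below can regenerate it. A minimal sketch of that check, assuming direct host access via os/exec instead of minikube's ssh_runner; the helper name cleanStaleKubeconfigs is hypothetical, not minikube's own function.

package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleKubeconfigs mirrors the log above: grep each kubeconfig for the
// expected API server endpoint and delete the file when the grep fails
// (missing file or wrong endpoint), so "kubeadm init phase kubeconfig all"
// can write a fresh copy. Sketch only; minikube runs these over SSH.
func cleanStaleKubeconfigs(endpoint string) error {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			if rmErr := exec.Command("sudo", "rm", "-f", f).Run(); rmErr != nil {
				return fmt.Errorf("removing stale %s: %w", f, rmErr)
			}
		}
	}
	return nil
}

func main() {
	if err := cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443"); err != nil {
		fmt.Println(err)
	}
}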
	I0910 19:00:12.841060   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:13.340347   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:13.841136   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:14.340603   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:14.840913   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:15.341205   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:15.840692   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:16.340839   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:16.841050   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:17.341340   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
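The interleaved pgrep lines from PID 72122 belong to another profile's restart and simply poll, roughly every 500ms, for a kube-apiserver process whose command line mentions minikube. A sketch of that wait loop, assuming a local pgrep and an arbitrary 2-minute cap (the interval is read off the log timestamps, the timeout and the helper name waitForAPIServerProcess are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process matching
// the minikube pattern exists, or the timeout expires.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}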
	I0910 19:00:17.224985   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:19.231248   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:18.534658   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:20.534807   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:17.309791   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.257162   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.482216   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.555094   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
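Rather than a full kubeadm init, the restart path replays the individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied /var/tmp/minikube/kubeadm.yaml, regenerating certificates and static pod manifests without tearing the cluster down. A sketch of that sequence, assuming the pinned binaries directory shown in the log; runInitPhases is a hypothetical helper.

package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases replays the kubeadm init phases the log shows, one at a time,
// with the pinned kubeadm binary placed first on PATH. Sketch only.
func runInitPhases(version, config string) error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	pathEnv := "PATH=/var/lib/minikube/binaries/" + version + ":$PATH"
	for _, phase := range phases {
		script := fmt.Sprintf("sudo env %s kubeadm init phase %s --config %s", pathEnv, phase, config)
		if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases("v1.31.0", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}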
	I0910 19:00:18.645089   71183 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:00:18.645178   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.146266   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.645546   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.146275   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.645291   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.662158   71183 api_server.go:72] duration metric: took 2.017082575s to wait for apiserver process to appear ...
	I0910 19:00:20.662183   71183 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:00:20.662204   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:17.840510   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:18.340821   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:18.841156   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.340316   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.840339   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.341140   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.841333   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:21.340342   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:21.840282   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:22.340361   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.326005   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 19:00:23.326036   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 19:00:23.326048   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:23.346004   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 19:00:23.346035   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 19:00:23.662353   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:23.669314   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:00:23.669344   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:00:24.162975   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:24.170262   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:00:24.170298   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:00:24.662865   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:24.667320   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I0910 19:00:24.674393   71183 api_server.go:141] control plane version: v1.31.0
	I0910 19:00:24.674418   71183 api_server.go:131] duration metric: took 4.01222766s to wait for apiserver health ...
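The healthz sequence above is the expected shape of a restart: the 403 responses mean the anonymous-user RBAC bootstrap has not run yet, the 500 responses show the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks still pending, and the wait ends only when /healthz returns 200 with body "ok". A minimal polling sketch, assuming certificate verification is skipped (the apiserver serves a self-signed cert at this point) and an assumed 4-minute cap; checkAPIServerHealthz is a hypothetical helper.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// checkAPIServerHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 with body "ok". 403 and 500 responses are treated as "not ready
// yet" and retried, matching the log above.
func checkAPIServerHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz did not become ok within %s", timeout)
}

func main() {
	if err := checkAPIServerHealthz("https://192.168.39.107:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}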
	I0910 19:00:24.674427   71183 cni.go:84] Creating CNI manager for ""
	I0910 19:00:24.674433   71183 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:00:24.676229   71183 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 19:00:24.677519   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 19:00:24.692951   71183 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 19:00:24.718355   71183 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:00:24.732731   71183 system_pods.go:59] 8 kube-system pods found
	I0910 19:00:24.732758   71183 system_pods.go:61] "coredns-6f6b679f8f-mt78p" [b4bbfe99-3c36-4095-b7e8-ee0861f9973f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 19:00:24.732764   71183 system_pods.go:61] "etcd-embed-certs-836868" [0515a8db-e1b9-41d9-a69e-e49fbd6af70f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 19:00:24.732775   71183 system_pods.go:61] "kube-apiserver-embed-certs-836868" [f6771940-518b-4ae6-93a6-8ffd2e08a6ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 19:00:24.732781   71183 system_pods.go:61] "kube-controller-manager-embed-certs-836868" [07f6b11e-478b-4501-b615-8906877c7cbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 19:00:24.732798   71183 system_pods.go:61] "kube-proxy-4fddv" [13f0b1df-26eb-4a6c-957d-0b7655309cb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0910 19:00:24.732808   71183 system_pods.go:61] "kube-scheduler-embed-certs-836868" [a16bf2b1-3fe3-487b-92f7-f71f693b83dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 19:00:24.732817   71183 system_pods.go:61] "metrics-server-6867b74b74-26knw" [fdf89bfa-f2b6-4dc4-9279-ed75c1256494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:00:24.732823   71183 system_pods.go:61] "storage-provisioner" [47ed78c5-1cce-4d50-a023-5c356f331035] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 19:00:24.732835   71183 system_pods.go:74] duration metric: took 14.459216ms to wait for pod list to return data ...
	I0910 19:00:24.732846   71183 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:00:24.742472   71183 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:00:24.742497   71183 node_conditions.go:123] node cpu capacity is 2
	I0910 19:00:24.742507   71183 node_conditions.go:105] duration metric: took 9.657853ms to run NodePressure ...
	I0910 19:00:24.742523   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:25.021719   71183 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 19:00:25.026163   71183 kubeadm.go:739] kubelet initialised
	I0910 19:00:25.026187   71183 kubeadm.go:740] duration metric: took 4.442058ms waiting for restarted kubelet to initialise ...
	I0910 19:00:25.026196   71183 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:00:25.030895   71183 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.035021   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.035044   71183 pod_ready.go:82] duration metric: took 4.12756ms for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.035055   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.035064   71183 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.039362   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "etcd-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.039381   71183 pod_ready.go:82] duration metric: took 4.309293ms for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.039389   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "etcd-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.039394   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.049142   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.049164   71183 pod_ready.go:82] duration metric: took 9.762471ms for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.049175   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.049182   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.122255   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.122285   71183 pod_ready.go:82] duration metric: took 73.09407ms for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.122295   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.122301   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.522122   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-proxy-4fddv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.522160   71183 pod_ready.go:82] duration metric: took 399.850787ms for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.522175   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-proxy-4fddv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.522185   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.921918   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.921947   71183 pod_ready.go:82] duration metric: took 399.75274ms for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.921956   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.921962   71183 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:26.322195   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:26.322219   71183 pod_ready.go:82] duration metric: took 400.248825ms for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:26.322228   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:26.322235   71183 pod_ready.go:39] duration metric: took 1.296028669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
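The pod_ready.go lines above show the post-restart wait: each system-critical pod is checked for the PodReady condition, but while the node itself still reports Ready=False the per-pod wait is short-circuited and logged as "skipping". A sketch of those two checks with client-go, assuming a kubeconfig at the default location; isPodReady and isNodeReady are hypothetical helpers, not minikube's own functions.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// isNodeReady reports whether the node's Ready condition is True; while it is
// not, waiting on individual pods is pointless, hence the "skipping" lines.
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	node, err := cs.CoreV1().Nodes().Get(ctx, "embed-certs-836868", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-6f6b679f8f-mt78p", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node ready: %v, pod ready: %v\n", isNodeReady(node), isPodReady(pod))
}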
	I0910 19:00:26.322251   71183 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 19:00:26.333796   71183 ops.go:34] apiserver oom_adj: -16
	I0910 19:00:26.333824   71183 kubeadm.go:597] duration metric: took 9.310018521s to restartPrimaryControlPlane
	I0910 19:00:26.333834   71183 kubeadm.go:394] duration metric: took 9.359219145s to StartCluster
	I0910 19:00:26.333850   71183 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:00:26.333920   71183 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 19:00:26.336496   71183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:00:26.336792   71183 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 19:00:26.336863   71183 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 19:00:26.336935   71183 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-836868"
	I0910 19:00:26.336969   71183 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-836868"
	W0910 19:00:26.336980   71183 addons.go:243] addon storage-provisioner should already be in state true
	I0910 19:00:26.336995   71183 addons.go:69] Setting default-storageclass=true in profile "embed-certs-836868"
	I0910 19:00:26.337050   71183 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-836868"
	I0910 19:00:26.337058   71183 config.go:182] Loaded profile config "embed-certs-836868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:00:26.337050   71183 addons.go:69] Setting metrics-server=true in profile "embed-certs-836868"
	I0910 19:00:26.337011   71183 host.go:66] Checking if "embed-certs-836868" exists ...
	I0910 19:00:26.337146   71183 addons.go:234] Setting addon metrics-server=true in "embed-certs-836868"
	W0910 19:00:26.337165   71183 addons.go:243] addon metrics-server should already be in state true
	I0910 19:00:26.337234   71183 host.go:66] Checking if "embed-certs-836868" exists ...
	I0910 19:00:26.337501   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.337547   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.337552   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.337583   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.337638   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.337677   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.339741   71183 out.go:177] * Verifying Kubernetes components...
	I0910 19:00:26.341792   71183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:00:26.354154   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37431
	I0910 19:00:26.354750   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.355345   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.355379   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.355756   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.356316   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
	I0910 19:00:26.356389   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.356428   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.356508   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35641
	I0910 19:00:26.356810   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.356893   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.357384   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.357411   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.361164   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.361278   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.361302   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.361363   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.361709   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.362446   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.362483   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.364762   71183 addons.go:234] Setting addon default-storageclass=true in "embed-certs-836868"
	W0910 19:00:26.364786   71183 addons.go:243] addon default-storageclass should already be in state true
	I0910 19:00:26.364814   71183 host.go:66] Checking if "embed-certs-836868" exists ...
	I0910 19:00:26.365165   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.365230   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.379158   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35583
	I0910 19:00:26.379696   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.380235   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.380266   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.380654   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.380865   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.382030   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0910 19:00:26.382358   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.382892   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.382912   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.382928   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:26.383271   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.383441   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.385129   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:26.385171   71183 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 19:00:26.385687   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40109
	I0910 19:00:26.386001   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.386217   71183 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:00:21.723833   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:23.724422   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:25.724456   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:23.034262   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:25.035125   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:26.386227   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 19:00:26.386289   71183 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 19:00:26.386309   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:26.386518   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.386533   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.386931   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.387566   71183 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:00:26.387651   71183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 19:00:26.387672   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:26.387618   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.387760   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.389782   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.389941   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:26.390190   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.390263   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:26.390558   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:26.390744   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:26.390921   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:26.391058   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.391542   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:26.391585   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.391788   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:26.391941   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:26.392097   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:26.392256   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:26.404601   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42887
	I0910 19:00:26.405167   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.406097   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.406655   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.407006   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.407163   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.409223   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:26.409437   71183 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 19:00:26.409454   71183 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 19:00:26.409470   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:26.412388   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.412812   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:26.412831   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.413010   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:26.413177   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:26.413333   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:26.413474   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:26.533906   71183 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:00:26.552203   71183 node_ready.go:35] waiting up to 6m0s for node "embed-certs-836868" to be "Ready" ...
	I0910 19:00:26.687774   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 19:00:26.687804   71183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 19:00:26.690124   71183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:00:26.737647   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 19:00:26.737673   71183 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 19:00:26.739650   71183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 19:00:26.783096   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:00:26.783125   71183 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 19:00:26.828766   71183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:00:22.841048   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.341180   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.841325   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:24.340485   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:24.841340   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:25.340935   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:25.840886   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:26.340826   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:26.840344   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:27.341189   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:27.844896   71183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.154733205s)
	I0910 19:00:27.844931   71183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.105250764s)
	I0910 19:00:27.844944   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.844969   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.844979   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.844980   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.845406   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.845420   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.845434   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.845446   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.845464   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.845471   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.845702   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.845733   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.845747   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.847084   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.847101   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.847110   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.847118   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.847308   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.847323   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.852938   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.852956   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.853198   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.853219   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.853224   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.879527   71183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.05071539s)
	I0910 19:00:27.879577   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.879597   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.880030   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.880050   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.880059   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.880081   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.880381   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.880405   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.880416   71183 addons.go:475] Verifying addon metrics-server=true in "embed-certs-836868"
	I0910 19:00:27.880383   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.883034   71183 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0910 19:00:28.222881   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:30.223636   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:27.535036   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:30.034633   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:27.884243   71183 addons.go:510] duration metric: took 1.547392632s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
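The addon lines above show how enable-addons works against the restarted cluster: each manifest is copied into /etc/kubernetes/addons/ and applied with the cluster's own kubectl from /var/lib/minikube/binaries/v1.31.0, pointed at /var/lib/minikube/kubeconfig, and metrics-server then stays in the "Verifying" state because its pod is still Pending. A sketch of that apply step, again assuming direct host access via os/exec; applyAddonManifests is a hypothetical name.

package main

import (
	"fmt"
	"os/exec"
)

// applyAddonManifests runs the same form of command the log shows: the pinned
// kubectl binary, the in-VM kubeconfig, and one -f flag per addon manifest.
func applyAddonManifests(kubectlVersion string, manifests ...string) error {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/" + kubectlVersion + "/kubectl",
		"apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyAddonManifests("v1.31.0",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	if err != nil {
		fmt.Println(err)
	}
}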
	I0910 19:00:28.556786   71183 node_ready.go:53] node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:31.055519   71183 node_ready.go:53] node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:27.840306   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:28.340657   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:28.841179   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:29.340881   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:29.840957   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:30.341260   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:30.841151   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:31.340930   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:31.840360   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:32.341199   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:32.724435   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:35.223194   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:32.533611   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:34.534941   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:37.034007   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:33.056381   71183 node_ready.go:53] node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:34.056156   71183 node_ready.go:49] node "embed-certs-836868" has status "Ready":"True"
	I0910 19:00:34.056191   71183 node_ready.go:38] duration metric: took 7.503955102s for node "embed-certs-836868" to be "Ready" ...
	I0910 19:00:34.056200   71183 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:00:34.063331   71183 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:34.068294   71183 pod_ready.go:93] pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:34.068322   71183 pod_ready.go:82] duration metric: took 4.96275ms for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:34.068335   71183 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:36.077798   71183 pod_ready.go:103] pod "etcd-embed-certs-836868" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:32.841192   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:33.340518   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:33.840995   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:34.341016   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:34.840480   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:35.340647   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:35.840416   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:36.340921   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:36.840557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:37.340956   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:37.224065   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:39.723852   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:39.533725   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:41.534430   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:37.576189   71183 pod_ready.go:93] pod "etcd-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.576218   71183 pod_ready.go:82] duration metric: took 3.507872898s for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.576238   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.582150   71183 pod_ready.go:93] pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.582167   71183 pod_ready.go:82] duration metric: took 5.921544ms for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.582175   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.586941   71183 pod_ready.go:93] pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.586956   71183 pod_ready.go:82] duration metric: took 4.774648ms for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.586963   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.591829   71183 pod_ready.go:93] pod "kube-proxy-4fddv" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.591846   71183 pod_ready.go:82] duration metric: took 4.876938ms for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.591854   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.657930   71183 pod_ready.go:93] pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.657952   71183 pod_ready.go:82] duration metric: took 66.092785ms for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.657962   71183 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:39.665465   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:41.666629   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:37.841210   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:38.341302   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:38.840926   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:39.340558   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:39.840395   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:40.341022   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:40.841093   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:41.341228   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:41.841103   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:42.340329   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:42.223446   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:44.223533   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:46.224840   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:44.033565   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:46.034402   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:44.164336   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:46.164983   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:42.841000   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:43.341147   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:43.840534   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:44.340988   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:44.840926   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:45.340859   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:45.840877   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:46.340930   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:46.841175   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:47.341064   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.722930   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:50.723539   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:48.036816   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:50.534367   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:48.667433   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:51.164114   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:47.841037   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.341204   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.840961   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:49.340679   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:49.841173   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:50.340751   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:50.841158   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:51.340999   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:51.840349   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:52.340383   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:52.723945   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:55.224168   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:53.034234   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:55.533690   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:53.164294   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:55.666369   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:52.840991   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:53.340439   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:53.840487   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:54.340407   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:54.840557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:55.340603   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:55.840619   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:56.340844   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:56.841190   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:57.340927   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:57.724247   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:00.223715   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:58.033639   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:00.034297   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:57.670234   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:00.164278   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:02.164755   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:57.840798   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:58.340905   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:58.841330   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:59.340743   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:59.840256   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:00.340970   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:00.840732   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:01.340927   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:01.341014   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:01.378922   72122 cri.go:89] found id: ""
	I0910 19:01:01.378953   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.378964   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:01.378971   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:01.379032   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:01.413274   72122 cri.go:89] found id: ""
	I0910 19:01:01.413302   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.413313   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:01.413320   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:01.413383   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:01.449165   72122 cri.go:89] found id: ""
	I0910 19:01:01.449204   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.449215   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:01.449221   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:01.449291   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:01.484627   72122 cri.go:89] found id: ""
	I0910 19:01:01.484650   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.484657   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:01.484663   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:01.484720   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:01.519332   72122 cri.go:89] found id: ""
	I0910 19:01:01.519357   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.519364   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:01.519370   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:01.519424   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:01.554080   72122 cri.go:89] found id: ""
	I0910 19:01:01.554102   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.554109   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:01.554114   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:01.554160   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:01.590100   72122 cri.go:89] found id: ""
	I0910 19:01:01.590131   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.590143   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:01.590149   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:01.590208   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:01.623007   72122 cri.go:89] found id: ""
	I0910 19:01:01.623034   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.623045   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:01.623055   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:01.623070   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:01.679940   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:01.679971   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:01.694183   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:01.694218   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:01.826997   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:01.827025   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:01.827038   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:01.903885   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:01.903926   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:02.224039   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:04.224422   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:02.533395   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:05.034075   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:04.665680   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:06.665874   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:04.450792   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:04.471427   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:04.471501   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:04.521450   72122 cri.go:89] found id: ""
	I0910 19:01:04.521484   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.521494   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:04.521503   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:04.521562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:04.577588   72122 cri.go:89] found id: ""
	I0910 19:01:04.577622   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.577633   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:04.577641   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:04.577707   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:04.615558   72122 cri.go:89] found id: ""
	I0910 19:01:04.615586   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.615594   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:04.615599   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:04.615652   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:04.655763   72122 cri.go:89] found id: ""
	I0910 19:01:04.655793   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.655806   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:04.655815   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:04.655881   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:04.692620   72122 cri.go:89] found id: ""
	I0910 19:01:04.692642   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.692649   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:04.692654   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:04.692709   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:04.730575   72122 cri.go:89] found id: ""
	I0910 19:01:04.730601   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.730611   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:04.730616   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:04.730665   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:04.766716   72122 cri.go:89] found id: ""
	I0910 19:01:04.766742   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.766749   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:04.766754   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:04.766799   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:04.808122   72122 cri.go:89] found id: ""
	I0910 19:01:04.808151   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.808162   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:04.808173   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:04.808185   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:04.858563   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:04.858592   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:04.872323   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:04.872350   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:04.942541   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:04.942571   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:04.942588   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:05.022303   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:05.022338   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:06.723760   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:08.724550   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:11.223094   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:07.533060   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:09.534466   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:12.034244   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:09.163526   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:11.164502   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:07.562092   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:07.575254   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:07.575308   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:07.616583   72122 cri.go:89] found id: ""
	I0910 19:01:07.616607   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.616615   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:07.616620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:07.616676   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:07.654676   72122 cri.go:89] found id: ""
	I0910 19:01:07.654700   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.654711   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:07.654718   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:07.654790   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:07.690054   72122 cri.go:89] found id: ""
	I0910 19:01:07.690085   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.690096   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:07.690104   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:07.690171   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:07.724273   72122 cri.go:89] found id: ""
	I0910 19:01:07.724295   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.724302   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:07.724307   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:07.724363   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:07.757621   72122 cri.go:89] found id: ""
	I0910 19:01:07.757646   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.757654   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:07.757660   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:07.757716   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:07.791502   72122 cri.go:89] found id: ""
	I0910 19:01:07.791533   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.791543   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:07.791557   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:07.791620   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:07.825542   72122 cri.go:89] found id: ""
	I0910 19:01:07.825577   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.825586   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:07.825592   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:07.825649   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:07.862278   72122 cri.go:89] found id: ""
	I0910 19:01:07.862303   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.862312   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:07.862320   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:07.862331   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:07.952016   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:07.952059   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:07.997004   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:07.997034   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:08.047745   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:08.047783   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:08.064712   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:08.064736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:08.136822   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:10.637017   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:10.650113   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:10.650198   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:10.687477   72122 cri.go:89] found id: ""
	I0910 19:01:10.687504   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.687513   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:10.687520   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:10.687594   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:10.721410   72122 cri.go:89] found id: ""
	I0910 19:01:10.721437   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.721447   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:10.721455   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:10.721514   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:10.757303   72122 cri.go:89] found id: ""
	I0910 19:01:10.757330   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.757338   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:10.757343   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:10.757396   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:10.794761   72122 cri.go:89] found id: ""
	I0910 19:01:10.794788   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.794799   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:10.794806   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:10.794885   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:10.828631   72122 cri.go:89] found id: ""
	I0910 19:01:10.828657   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.828668   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:10.828675   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:10.828737   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:10.863609   72122 cri.go:89] found id: ""
	I0910 19:01:10.863634   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.863641   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:10.863646   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:10.863734   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:10.899299   72122 cri.go:89] found id: ""
	I0910 19:01:10.899324   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.899335   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:10.899342   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:10.899403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:10.939233   72122 cri.go:89] found id: ""
	I0910 19:01:10.939259   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.939268   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:10.939277   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:10.939290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:10.976599   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:10.976627   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:11.029099   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:11.029144   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:11.045401   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:11.045426   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:11.119658   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:11.119679   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:11.119696   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:13.224189   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:15.723673   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:14.034325   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:16.534463   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:13.663847   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:15.664387   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:13.698696   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:13.712317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:13.712386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:13.747442   72122 cri.go:89] found id: ""
	I0910 19:01:13.747470   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.747480   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:13.747487   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:13.747555   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:13.782984   72122 cri.go:89] found id: ""
	I0910 19:01:13.783008   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.783015   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:13.783021   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:13.783078   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:13.820221   72122 cri.go:89] found id: ""
	I0910 19:01:13.820245   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.820256   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:13.820262   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:13.820322   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:13.854021   72122 cri.go:89] found id: ""
	I0910 19:01:13.854056   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.854068   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:13.854075   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:13.854138   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:13.888292   72122 cri.go:89] found id: ""
	I0910 19:01:13.888321   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.888331   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:13.888338   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:13.888398   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:13.922301   72122 cri.go:89] found id: ""
	I0910 19:01:13.922330   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.922341   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:13.922349   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:13.922408   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:13.959977   72122 cri.go:89] found id: ""
	I0910 19:01:13.960002   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.960010   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:13.960015   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:13.960074   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:13.995255   72122 cri.go:89] found id: ""
	I0910 19:01:13.995282   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.995293   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:13.995308   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:13.995323   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:14.050760   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:14.050790   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:14.064694   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:14.064723   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:14.137406   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:14.137431   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:14.137447   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:14.216624   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:14.216657   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:16.765643   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:16.778746   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:16.778821   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:16.814967   72122 cri.go:89] found id: ""
	I0910 19:01:16.814999   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.815010   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:16.815017   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:16.815073   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:16.850306   72122 cri.go:89] found id: ""
	I0910 19:01:16.850334   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.850345   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:16.850352   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:16.850413   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:16.886104   72122 cri.go:89] found id: ""
	I0910 19:01:16.886134   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.886144   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:16.886152   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:16.886218   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:16.921940   72122 cri.go:89] found id: ""
	I0910 19:01:16.921968   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.921977   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:16.921983   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:16.922032   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:16.956132   72122 cri.go:89] found id: ""
	I0910 19:01:16.956166   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.956177   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:16.956185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:16.956247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:16.988240   72122 cri.go:89] found id: ""
	I0910 19:01:16.988269   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.988278   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:16.988284   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:16.988330   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:17.022252   72122 cri.go:89] found id: ""
	I0910 19:01:17.022281   72122 logs.go:276] 0 containers: []
	W0910 19:01:17.022291   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:17.022297   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:17.022364   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:17.058664   72122 cri.go:89] found id: ""
	I0910 19:01:17.058693   72122 logs.go:276] 0 containers: []
	W0910 19:01:17.058703   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:17.058715   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:17.058740   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:17.136927   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:17.136964   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:17.189427   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:17.189457   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:17.242193   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:17.242225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:17.257878   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:17.257908   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:17.330096   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:17.724465   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:20.224230   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:18.534806   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:21.034368   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:17.667897   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:20.165174   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:22.165421   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:19.831030   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:19.844516   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:19.844581   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:19.879878   72122 cri.go:89] found id: ""
	I0910 19:01:19.879908   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.879919   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:19.879927   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:19.879988   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:19.915992   72122 cri.go:89] found id: ""
	I0910 19:01:19.916018   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.916025   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:19.916030   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:19.916084   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:19.949206   72122 cri.go:89] found id: ""
	I0910 19:01:19.949232   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.949242   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:19.949249   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:19.949311   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:19.983011   72122 cri.go:89] found id: ""
	I0910 19:01:19.983035   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.983043   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:19.983048   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:19.983096   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:20.018372   72122 cri.go:89] found id: ""
	I0910 19:01:20.018394   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.018402   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:20.018408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:20.018466   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:20.053941   72122 cri.go:89] found id: ""
	I0910 19:01:20.053967   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.053975   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:20.053980   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:20.054037   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:20.084999   72122 cri.go:89] found id: ""
	I0910 19:01:20.085026   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.085035   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:20.085042   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:20.085115   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:20.124036   72122 cri.go:89] found id: ""
	I0910 19:01:20.124063   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.124072   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:20.124086   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:20.124103   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:20.176917   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:20.176944   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:20.190831   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:20.190852   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:20.257921   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:20.257942   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:20.257954   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:20.335320   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:20.335350   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:22.723788   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:25.223765   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:23.034456   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:25.534821   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:24.663208   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:26.664282   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:22.875167   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:22.888803   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:22.888858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:22.922224   72122 cri.go:89] found id: ""
	I0910 19:01:22.922252   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.922264   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:22.922270   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:22.922328   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:22.959502   72122 cri.go:89] found id: ""
	I0910 19:01:22.959536   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.959546   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:22.959553   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:22.959619   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:22.992914   72122 cri.go:89] found id: ""
	I0910 19:01:22.992944   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.992955   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:22.992962   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:22.993022   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:23.028342   72122 cri.go:89] found id: ""
	I0910 19:01:23.028367   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.028376   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:23.028384   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:23.028443   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:23.064715   72122 cri.go:89] found id: ""
	I0910 19:01:23.064742   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.064753   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:23.064761   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:23.064819   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:23.100752   72122 cri.go:89] found id: ""
	I0910 19:01:23.100781   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.100789   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:23.100795   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:23.100857   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:23.136017   72122 cri.go:89] found id: ""
	I0910 19:01:23.136045   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.136055   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:23.136062   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:23.136108   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:23.170787   72122 cri.go:89] found id: ""
	I0910 19:01:23.170811   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.170819   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:23.170826   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:23.170840   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:23.210031   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:23.210059   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:23.261525   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:23.261557   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:23.275611   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:23.275636   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:23.348543   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:23.348568   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:23.348582   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:25.929406   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:25.942658   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:25.942737   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:25.977231   72122 cri.go:89] found id: ""
	I0910 19:01:25.977260   72122 logs.go:276] 0 containers: []
	W0910 19:01:25.977270   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:25.977277   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:25.977336   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:26.015060   72122 cri.go:89] found id: ""
	I0910 19:01:26.015093   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.015103   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:26.015110   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:26.015180   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:26.053618   72122 cri.go:89] found id: ""
	I0910 19:01:26.053643   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.053651   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:26.053656   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:26.053712   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:26.090489   72122 cri.go:89] found id: ""
	I0910 19:01:26.090515   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.090523   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:26.090529   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:26.090587   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:26.126687   72122 cri.go:89] found id: ""
	I0910 19:01:26.126710   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.126718   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:26.126723   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:26.126771   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:26.160901   72122 cri.go:89] found id: ""
	I0910 19:01:26.160939   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.160951   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:26.160959   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:26.161017   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:26.195703   72122 cri.go:89] found id: ""
	I0910 19:01:26.195728   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.195737   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:26.195743   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:26.195794   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:26.230394   72122 cri.go:89] found id: ""
	I0910 19:01:26.230414   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.230422   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:26.230430   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:26.230444   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:26.296884   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:26.296905   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:26.296926   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:26.371536   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:26.371569   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:26.412926   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:26.412958   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:26.462521   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:26.462550   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:27.725957   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:30.224312   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:28.034338   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:30.034794   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:32.035284   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:28.668205   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:31.166271   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:28.976550   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:28.989517   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:28.989586   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:29.025638   72122 cri.go:89] found id: ""
	I0910 19:01:29.025662   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.025671   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:29.025677   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:29.025719   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:29.067473   72122 cri.go:89] found id: ""
	I0910 19:01:29.067495   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.067502   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:29.067507   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:29.067556   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:29.105587   72122 cri.go:89] found id: ""
	I0910 19:01:29.105616   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.105628   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:29.105635   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:29.105696   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:29.142427   72122 cri.go:89] found id: ""
	I0910 19:01:29.142458   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.142468   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:29.142474   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:29.142530   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:29.178553   72122 cri.go:89] found id: ""
	I0910 19:01:29.178575   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.178582   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:29.178587   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:29.178638   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:29.212997   72122 cri.go:89] found id: ""
	I0910 19:01:29.213025   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.213034   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:29.213040   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:29.213109   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:29.247057   72122 cri.go:89] found id: ""
	I0910 19:01:29.247083   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.247091   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:29.247097   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:29.247151   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:29.285042   72122 cri.go:89] found id: ""
	I0910 19:01:29.285084   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.285096   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:29.285107   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:29.285131   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:29.336003   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:29.336033   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:29.349867   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:29.349890   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:29.422006   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:29.422028   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:29.422043   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:29.504047   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:29.504079   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:32.050723   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:32.063851   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:32.063904   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:32.100816   72122 cri.go:89] found id: ""
	I0910 19:01:32.100841   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.100851   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:32.100858   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:32.100924   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:32.134863   72122 cri.go:89] found id: ""
	I0910 19:01:32.134892   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.134902   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:32.134909   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:32.134967   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:32.169873   72122 cri.go:89] found id: ""
	I0910 19:01:32.169901   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.169912   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:32.169919   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:32.169973   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:32.202161   72122 cri.go:89] found id: ""
	I0910 19:01:32.202187   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.202197   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:32.202204   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:32.202264   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:32.236850   72122 cri.go:89] found id: ""
	I0910 19:01:32.236879   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.236888   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:32.236896   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:32.236957   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:32.271479   72122 cri.go:89] found id: ""
	I0910 19:01:32.271511   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.271530   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:32.271542   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:32.271701   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:32.306724   72122 cri.go:89] found id: ""
	I0910 19:01:32.306747   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.306754   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:32.306760   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:32.306811   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:32.341153   72122 cri.go:89] found id: ""
	I0910 19:01:32.341184   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.341195   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:32.341206   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:32.341221   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:32.393087   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:32.393122   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:32.406565   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:32.406591   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:32.478030   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:32.478048   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:32.478079   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:32.224371   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:34.723372   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:34.533510   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:37.033933   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:33.671725   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:36.165396   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:32.568440   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:32.568478   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:35.112022   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:35.125210   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:35.125286   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:35.160716   72122 cri.go:89] found id: ""
	I0910 19:01:35.160743   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.160753   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:35.160759   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:35.160817   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:35.196500   72122 cri.go:89] found id: ""
	I0910 19:01:35.196530   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.196541   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:35.196548   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:35.196622   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:35.232476   72122 cri.go:89] found id: ""
	I0910 19:01:35.232510   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.232521   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:35.232528   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:35.232590   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:35.269612   72122 cri.go:89] found id: ""
	I0910 19:01:35.269635   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.269644   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:35.269649   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:35.269697   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:35.307368   72122 cri.go:89] found id: ""
	I0910 19:01:35.307393   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.307401   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:35.307408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:35.307475   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:35.342079   72122 cri.go:89] found id: ""
	I0910 19:01:35.342108   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.342119   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:35.342126   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:35.342188   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:35.379732   72122 cri.go:89] found id: ""
	I0910 19:01:35.379761   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.379771   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:35.379778   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:35.379840   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:35.419067   72122 cri.go:89] found id: ""
	I0910 19:01:35.419098   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.419109   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:35.419120   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:35.419139   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:35.472459   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:35.472494   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:35.487044   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:35.487078   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:35.565242   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:35.565264   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:35.565282   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:35.645918   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:35.645951   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:36.724330   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:38.724368   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:41.224272   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:39.533968   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:41.534579   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:38.666059   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:41.164158   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:38.189238   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:38.203973   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:38.204035   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:38.241402   72122 cri.go:89] found id: ""
	I0910 19:01:38.241428   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.241438   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:38.241446   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:38.241506   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:38.280657   72122 cri.go:89] found id: ""
	I0910 19:01:38.280685   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.280693   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:38.280698   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:38.280753   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:38.319697   72122 cri.go:89] found id: ""
	I0910 19:01:38.319725   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.319735   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:38.319742   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:38.319804   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:38.356766   72122 cri.go:89] found id: ""
	I0910 19:01:38.356799   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.356810   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:38.356817   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:38.356876   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:38.395468   72122 cri.go:89] found id: ""
	I0910 19:01:38.395497   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.395508   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:38.395516   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:38.395577   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:38.434942   72122 cri.go:89] found id: ""
	I0910 19:01:38.434965   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.434974   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:38.434979   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:38.435025   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:38.470687   72122 cri.go:89] found id: ""
	I0910 19:01:38.470715   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.470724   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:38.470729   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:38.470777   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:38.505363   72122 cri.go:89] found id: ""
	I0910 19:01:38.505394   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.505405   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:38.505417   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:38.505432   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:38.557735   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:38.557770   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:38.586094   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:38.586128   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:38.665190   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:38.665215   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:38.665231   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:38.743748   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:38.743779   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:41.284310   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:41.299086   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:41.299157   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:41.340453   72122 cri.go:89] found id: ""
	I0910 19:01:41.340476   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.340484   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:41.340489   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:41.340544   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:41.374028   72122 cri.go:89] found id: ""
	I0910 19:01:41.374052   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.374060   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:41.374066   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:41.374117   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:41.413888   72122 cri.go:89] found id: ""
	I0910 19:01:41.413915   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.413929   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:41.413935   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:41.413994   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:41.450846   72122 cri.go:89] found id: ""
	I0910 19:01:41.450873   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.450883   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:41.450890   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:41.450950   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:41.484080   72122 cri.go:89] found id: ""
	I0910 19:01:41.484107   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.484115   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:41.484120   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:41.484168   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:41.523652   72122 cri.go:89] found id: ""
	I0910 19:01:41.523677   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.523685   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:41.523690   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:41.523749   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:41.563690   72122 cri.go:89] found id: ""
	I0910 19:01:41.563715   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.563727   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:41.563734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:41.563797   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:41.602101   72122 cri.go:89] found id: ""
	I0910 19:01:41.602122   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.602130   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:41.602137   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:41.602152   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:41.655459   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:41.655488   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:41.670037   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:41.670062   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:41.741399   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:41.741417   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:41.741428   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:41.817411   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:41.817445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:43.726285   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:46.223867   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:44.034404   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:46.533246   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:43.666629   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:46.164675   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:44.363631   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:44.378279   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:44.378344   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:44.412450   72122 cri.go:89] found id: ""
	I0910 19:01:44.412486   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.412495   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:44.412502   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:44.412569   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:44.448378   72122 cri.go:89] found id: ""
	I0910 19:01:44.448407   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.448415   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:44.448420   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:44.448470   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:44.483478   72122 cri.go:89] found id: ""
	I0910 19:01:44.483516   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.483524   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:44.483530   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:44.483584   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:44.517787   72122 cri.go:89] found id: ""
	I0910 19:01:44.517812   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.517822   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:44.517828   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:44.517886   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:44.554909   72122 cri.go:89] found id: ""
	I0910 19:01:44.554939   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.554950   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:44.554957   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:44.555018   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:44.589865   72122 cri.go:89] found id: ""
	I0910 19:01:44.589890   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.589909   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:44.589923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:44.589968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:44.626712   72122 cri.go:89] found id: ""
	I0910 19:01:44.626739   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.626749   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:44.626756   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:44.626815   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:44.664985   72122 cri.go:89] found id: ""
	I0910 19:01:44.665067   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.665103   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:44.665114   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:44.665165   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:44.721160   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:44.721196   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:44.735339   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:44.735366   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:44.810056   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:44.810080   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:44.810094   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:44.898822   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:44.898871   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:47.438440   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:47.451438   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:47.451506   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:48.723661   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:50.723768   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:48.534671   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:51.033397   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:48.164739   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:50.665165   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:47.491703   72122 cri.go:89] found id: ""
	I0910 19:01:47.491729   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.491740   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:47.491747   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:47.491811   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:47.526834   72122 cri.go:89] found id: ""
	I0910 19:01:47.526862   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.526874   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:47.526880   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:47.526940   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:47.570463   72122 cri.go:89] found id: ""
	I0910 19:01:47.570488   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.570496   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:47.570503   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:47.570545   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:47.608691   72122 cri.go:89] found id: ""
	I0910 19:01:47.608715   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.608727   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:47.608734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:47.608780   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:47.648279   72122 cri.go:89] found id: ""
	I0910 19:01:47.648308   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.648316   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:47.648324   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:47.648386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:47.684861   72122 cri.go:89] found id: ""
	I0910 19:01:47.684885   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.684892   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:47.684897   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:47.684947   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:47.721004   72122 cri.go:89] found id: ""
	I0910 19:01:47.721037   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.721049   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:47.721056   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:47.721134   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:47.756154   72122 cri.go:89] found id: ""
	I0910 19:01:47.756181   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.756192   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:47.756202   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:47.756217   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:47.806860   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:47.806889   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:47.822419   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:47.822445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:47.891966   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:47.891986   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:47.892000   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:47.978510   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:47.978561   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:50.519264   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:50.533576   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:50.533630   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:50.567574   72122 cri.go:89] found id: ""
	I0910 19:01:50.567601   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.567612   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:50.567619   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:50.567678   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:50.608824   72122 cri.go:89] found id: ""
	I0910 19:01:50.608850   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.608858   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:50.608863   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:50.608939   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:50.644502   72122 cri.go:89] found id: ""
	I0910 19:01:50.644530   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.644538   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:50.644544   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:50.644590   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:50.682309   72122 cri.go:89] found id: ""
	I0910 19:01:50.682332   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.682340   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:50.682345   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:50.682404   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:50.735372   72122 cri.go:89] found id: ""
	I0910 19:01:50.735398   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.735410   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:50.735418   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:50.735482   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:50.786364   72122 cri.go:89] found id: ""
	I0910 19:01:50.786391   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.786401   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:50.786408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:50.786464   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:50.831525   72122 cri.go:89] found id: ""
	I0910 19:01:50.831564   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.831575   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:50.831582   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:50.831645   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:50.873457   72122 cri.go:89] found id: ""
	I0910 19:01:50.873482   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.873493   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:50.873503   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:50.873524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:50.956032   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:50.956069   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:50.996871   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:50.996904   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:51.047799   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:51.047824   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:51.061946   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:51.061970   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:51.136302   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:53.222492   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:55.223835   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:53.034478   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:55.532623   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:52.665991   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:55.164343   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:53.636464   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:53.649971   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:53.650054   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:53.688172   72122 cri.go:89] found id: ""
	I0910 19:01:53.688201   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.688211   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:53.688217   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:53.688274   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:53.725094   72122 cri.go:89] found id: ""
	I0910 19:01:53.725119   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.725128   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:53.725135   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:53.725196   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:53.763866   72122 cri.go:89] found id: ""
	I0910 19:01:53.763893   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.763907   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:53.763914   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:53.763971   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:53.797760   72122 cri.go:89] found id: ""
	I0910 19:01:53.797787   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.797798   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:53.797805   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:53.797862   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:53.830305   72122 cri.go:89] found id: ""
	I0910 19:01:53.830332   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.830340   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:53.830346   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:53.830402   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:53.861970   72122 cri.go:89] found id: ""
	I0910 19:01:53.861995   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.862003   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:53.862009   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:53.862059   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:53.896577   72122 cri.go:89] found id: ""
	I0910 19:01:53.896600   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.896609   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:53.896614   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:53.896660   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:53.935051   72122 cri.go:89] found id: ""
	I0910 19:01:53.935077   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.935086   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:53.935094   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:53.935105   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:53.950252   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:53.950276   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:54.023327   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:54.023346   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:54.023361   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:54.101605   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:54.101643   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:54.142906   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:54.142930   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:56.697701   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:56.717755   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:56.717836   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:56.763564   72122 cri.go:89] found id: ""
	I0910 19:01:56.763594   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.763606   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:56.763613   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:56.763675   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:56.815780   72122 cri.go:89] found id: ""
	I0910 19:01:56.815808   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.815816   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:56.815821   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:56.815883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:56.848983   72122 cri.go:89] found id: ""
	I0910 19:01:56.849013   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.849024   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:56.849032   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:56.849100   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:56.880660   72122 cri.go:89] found id: ""
	I0910 19:01:56.880690   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.880702   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:56.880709   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:56.880756   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:56.922836   72122 cri.go:89] found id: ""
	I0910 19:01:56.922860   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.922867   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:56.922873   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:56.922938   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:56.963474   72122 cri.go:89] found id: ""
	I0910 19:01:56.963505   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.963517   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:56.963524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:56.963585   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:56.996837   72122 cri.go:89] found id: ""
	I0910 19:01:56.996864   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.996872   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:56.996877   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:56.996925   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:57.029594   72122 cri.go:89] found id: ""
	I0910 19:01:57.029629   72122 logs.go:276] 0 containers: []
	W0910 19:01:57.029640   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:57.029651   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:57.029664   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:57.083745   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:57.083772   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:57.099269   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:57.099293   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:57.174098   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:57.174118   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:57.174129   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:57.258833   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:57.258869   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:57.224384   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:59.722547   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:57.533178   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:59.533798   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:02.035089   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:57.665383   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:00.164920   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:59.800644   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:59.814728   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:59.814805   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:59.854081   72122 cri.go:89] found id: ""
	I0910 19:01:59.854113   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.854124   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:59.854133   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:59.854197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:59.889524   72122 cri.go:89] found id: ""
	I0910 19:01:59.889550   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.889560   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:59.889567   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:59.889626   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:59.925833   72122 cri.go:89] found id: ""
	I0910 19:01:59.925859   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.925866   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:59.925872   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:59.925935   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:59.962538   72122 cri.go:89] found id: ""
	I0910 19:01:59.962575   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.962586   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:59.962593   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:59.962650   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:59.996994   72122 cri.go:89] found id: ""
	I0910 19:01:59.997025   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.997037   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:59.997045   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:59.997126   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:00.032881   72122 cri.go:89] found id: ""
	I0910 19:02:00.032905   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.032915   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:00.032923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:00.032988   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:00.065838   72122 cri.go:89] found id: ""
	I0910 19:02:00.065861   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.065869   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:00.065874   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:00.065927   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:00.099479   72122 cri.go:89] found id: ""
	I0910 19:02:00.099505   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.099516   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:00.099526   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:00.099540   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:00.182661   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:00.182689   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:00.223514   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:00.223553   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:00.273695   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:00.273721   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:00.287207   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:00.287233   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:00.353975   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:01.724647   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:04.224071   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:06.225475   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:04.534230   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:06.534474   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:02.665228   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:04.667935   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:07.163506   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:02.854145   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:02.867413   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:02.867484   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:02.904299   72122 cri.go:89] found id: ""
	I0910 19:02:02.904327   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.904335   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:02.904340   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:02.904392   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:02.940981   72122 cri.go:89] found id: ""
	I0910 19:02:02.941010   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.941019   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:02.941024   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:02.941099   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:02.980013   72122 cri.go:89] found id: ""
	I0910 19:02:02.980038   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.980046   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:02.980052   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:02.980111   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:03.020041   72122 cri.go:89] found id: ""
	I0910 19:02:03.020071   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.020080   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:03.020087   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:03.020144   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:03.055228   72122 cri.go:89] found id: ""
	I0910 19:02:03.055264   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.055277   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:03.055285   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:03.055347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:03.088696   72122 cri.go:89] found id: ""
	I0910 19:02:03.088722   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.088730   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:03.088736   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:03.088787   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:03.124753   72122 cri.go:89] found id: ""
	I0910 19:02:03.124776   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.124785   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:03.124792   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:03.124849   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:03.157191   72122 cri.go:89] found id: ""
	I0910 19:02:03.157222   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.157230   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:03.157238   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:03.157248   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:03.239015   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:03.239044   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:03.279323   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:03.279355   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:03.328034   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:03.328067   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:03.341591   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:03.341620   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:03.411057   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:05.911503   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:05.924794   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:05.924868   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:05.958827   72122 cri.go:89] found id: ""
	I0910 19:02:05.958852   72122 logs.go:276] 0 containers: []
	W0910 19:02:05.958859   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:05.958865   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:05.958920   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:05.992376   72122 cri.go:89] found id: ""
	I0910 19:02:05.992412   72122 logs.go:276] 0 containers: []
	W0910 19:02:05.992423   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:05.992429   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:05.992485   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:06.028058   72122 cri.go:89] found id: ""
	I0910 19:02:06.028088   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.028098   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:06.028107   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:06.028162   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:06.066428   72122 cri.go:89] found id: ""
	I0910 19:02:06.066458   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.066470   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:06.066477   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:06.066533   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:06.102750   72122 cri.go:89] found id: ""
	I0910 19:02:06.102774   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.102782   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:06.102787   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:06.102841   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:06.137216   72122 cri.go:89] found id: ""
	I0910 19:02:06.137243   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.137254   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:06.137261   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:06.137323   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:06.175227   72122 cri.go:89] found id: ""
	I0910 19:02:06.175251   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.175259   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:06.175265   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:06.175311   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:06.210197   72122 cri.go:89] found id: ""
	I0910 19:02:06.210222   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.210230   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:06.210238   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:06.210249   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:06.261317   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:06.261353   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:06.275196   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:06.275225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:06.354186   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:06.354205   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:06.354219   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:06.436726   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:06.436763   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:08.723505   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:10.724499   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:09.035939   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:11.534648   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:09.166629   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:11.666941   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:08.979157   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:08.992097   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:08.992156   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:09.025260   72122 cri.go:89] found id: ""
	I0910 19:02:09.025282   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.025289   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:09.025295   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:09.025360   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:09.059139   72122 cri.go:89] found id: ""
	I0910 19:02:09.059166   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.059177   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:09.059186   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:09.059240   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:09.092935   72122 cri.go:89] found id: ""
	I0910 19:02:09.092964   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.092973   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:09.092979   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:09.093027   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:09.127273   72122 cri.go:89] found id: ""
	I0910 19:02:09.127299   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.127310   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:09.127317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:09.127367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:09.163353   72122 cri.go:89] found id: ""
	I0910 19:02:09.163380   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.163389   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:09.163396   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:09.163453   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:09.195371   72122 cri.go:89] found id: ""
	I0910 19:02:09.195396   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.195407   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:09.195414   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:09.195473   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:09.229338   72122 cri.go:89] found id: ""
	I0910 19:02:09.229361   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.229370   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:09.229376   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:09.229432   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:09.262822   72122 cri.go:89] found id: ""
	I0910 19:02:09.262847   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.262857   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:09.262874   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:09.262891   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:09.330079   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:09.330103   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:09.330119   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:09.408969   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:09.409003   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:09.447666   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:09.447702   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:09.501111   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:09.501141   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:12.016407   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:12.030822   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:12.030905   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:12.069191   72122 cri.go:89] found id: ""
	I0910 19:02:12.069218   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.069229   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:12.069236   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:12.069306   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:12.103687   72122 cri.go:89] found id: ""
	I0910 19:02:12.103726   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.103737   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:12.103862   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:12.103937   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:12.142891   72122 cri.go:89] found id: ""
	I0910 19:02:12.142920   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.142932   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:12.142940   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:12.142998   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:12.178966   72122 cri.go:89] found id: ""
	I0910 19:02:12.178991   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.179002   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:12.179010   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:12.179069   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:12.216070   72122 cri.go:89] found id: ""
	I0910 19:02:12.216093   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.216104   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:12.216112   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:12.216161   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:12.251447   72122 cri.go:89] found id: ""
	I0910 19:02:12.251479   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.251492   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:12.251500   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:12.251568   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:12.284640   72122 cri.go:89] found id: ""
	I0910 19:02:12.284666   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.284677   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:12.284682   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:12.284743   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:12.319601   72122 cri.go:89] found id: ""
	I0910 19:02:12.319625   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.319632   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:12.319639   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:12.319650   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:12.372932   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:12.372965   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:12.387204   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:12.387228   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:12.459288   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:12.459308   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:12.459323   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:13.223679   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:15.224341   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:14.034036   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:16.533341   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:13.667258   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:16.164610   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:12.549161   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:12.549198   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:15.092557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:15.105391   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:15.105456   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:15.139486   72122 cri.go:89] found id: ""
	I0910 19:02:15.139515   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.139524   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:15.139530   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:15.139591   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:15.173604   72122 cri.go:89] found id: ""
	I0910 19:02:15.173630   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.173641   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:15.173648   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:15.173710   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:15.208464   72122 cri.go:89] found id: ""
	I0910 19:02:15.208492   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.208503   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:15.208510   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:15.208568   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:15.247536   72122 cri.go:89] found id: ""
	I0910 19:02:15.247567   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.247579   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:15.247586   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:15.247650   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:15.285734   72122 cri.go:89] found id: ""
	I0910 19:02:15.285764   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.285775   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:15.285782   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:15.285858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:15.320755   72122 cri.go:89] found id: ""
	I0910 19:02:15.320782   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.320792   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:15.320798   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:15.320849   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:15.357355   72122 cri.go:89] found id: ""
	I0910 19:02:15.357384   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.357395   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:15.357402   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:15.357463   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:15.392105   72122 cri.go:89] found id: ""
	I0910 19:02:15.392130   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.392137   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:15.392149   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:15.392160   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:15.444433   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:15.444465   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:15.458759   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:15.458784   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:15.523490   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:15.523507   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:15.523524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:15.607584   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:15.607616   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:17.224472   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:19.723953   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:18.534545   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:20.535036   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:18.667949   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:20.669762   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:18.146611   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:18.160311   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:18.160378   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:18.195072   72122 cri.go:89] found id: ""
	I0910 19:02:18.195099   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.195109   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:18.195127   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:18.195201   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:18.230099   72122 cri.go:89] found id: ""
	I0910 19:02:18.230129   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.230138   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:18.230145   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:18.230201   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:18.268497   72122 cri.go:89] found id: ""
	I0910 19:02:18.268525   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.268534   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:18.268539   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:18.268599   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:18.304929   72122 cri.go:89] found id: ""
	I0910 19:02:18.304966   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.304978   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:18.304985   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:18.305048   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:18.339805   72122 cri.go:89] found id: ""
	I0910 19:02:18.339839   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.339861   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:18.339868   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:18.339925   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:18.378353   72122 cri.go:89] found id: ""
	I0910 19:02:18.378372   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.378381   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:18.378393   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:18.378438   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:18.415175   72122 cri.go:89] found id: ""
	I0910 19:02:18.415195   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.415203   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:18.415208   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:18.415262   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:18.450738   72122 cri.go:89] found id: ""
	I0910 19:02:18.450762   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.450769   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:18.450778   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:18.450793   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:18.530943   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:18.530975   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:18.568983   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:18.569021   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:18.622301   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:18.622336   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:18.635788   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:18.635815   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:18.715729   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:21.216082   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:21.229419   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:21.229488   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:21.265152   72122 cri.go:89] found id: ""
	I0910 19:02:21.265183   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.265193   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:21.265201   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:21.265262   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:21.300766   72122 cri.go:89] found id: ""
	I0910 19:02:21.300797   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.300815   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:21.300823   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:21.300883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:21.333416   72122 cri.go:89] found id: ""
	I0910 19:02:21.333443   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.333452   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:21.333460   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:21.333526   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:21.371112   72122 cri.go:89] found id: ""
	I0910 19:02:21.371142   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.371150   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:21.371156   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:21.371214   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:21.405657   72122 cri.go:89] found id: ""
	I0910 19:02:21.405684   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.405695   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:21.405703   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:21.405755   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:21.440354   72122 cri.go:89] found id: ""
	I0910 19:02:21.440381   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.440392   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:21.440400   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:21.440464   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:21.480165   72122 cri.go:89] found id: ""
	I0910 19:02:21.480189   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.480199   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:21.480206   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:21.480273   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:21.518422   72122 cri.go:89] found id: ""
	I0910 19:02:21.518449   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.518459   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:21.518470   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:21.518486   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:21.572263   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:21.572300   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:21.588179   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:21.588204   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:21.658330   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:21.658356   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:21.658371   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:21.743026   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:21.743063   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:21.724730   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.724844   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:26.225026   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.034593   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:25.037588   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.164712   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:25.664475   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:24.286604   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:24.299783   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:24.299847   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:24.336998   72122 cri.go:89] found id: ""
	I0910 19:02:24.337031   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.337042   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:24.337050   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:24.337123   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:24.374198   72122 cri.go:89] found id: ""
	I0910 19:02:24.374223   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.374231   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:24.374236   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:24.374289   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:24.407783   72122 cri.go:89] found id: ""
	I0910 19:02:24.407812   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.407822   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:24.407828   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:24.407881   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:24.443285   72122 cri.go:89] found id: ""
	I0910 19:02:24.443307   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.443315   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:24.443321   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:24.443367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:24.477176   72122 cri.go:89] found id: ""
	I0910 19:02:24.477198   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.477206   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:24.477212   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:24.477266   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:24.509762   72122 cri.go:89] found id: ""
	I0910 19:02:24.509783   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.509791   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:24.509797   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:24.509858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:24.548746   72122 cri.go:89] found id: ""
	I0910 19:02:24.548775   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.548785   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:24.548793   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:24.548851   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:24.583265   72122 cri.go:89] found id: ""
	I0910 19:02:24.583297   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.583313   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:24.583324   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:24.583338   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:24.634966   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:24.635001   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:24.649844   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:24.649869   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:24.721795   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:24.721824   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:24.721840   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:24.807559   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:24.807593   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:27.352779   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:27.366423   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:27.366495   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:27.399555   72122 cri.go:89] found id: ""
	I0910 19:02:27.399582   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.399591   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:27.399596   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:27.399662   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:27.434151   72122 cri.go:89] found id: ""
	I0910 19:02:27.434179   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.434188   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:27.434194   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:27.434265   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:27.467053   72122 cri.go:89] found id: ""
	I0910 19:02:27.467081   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.467092   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:27.467099   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:27.467156   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:28.724149   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:31.224185   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:27.533697   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:29.533815   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:32.034343   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:27.667816   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:30.164174   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:27.500999   72122 cri.go:89] found id: ""
	I0910 19:02:27.501030   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.501039   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:27.501044   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:27.501114   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:27.537981   72122 cri.go:89] found id: ""
	I0910 19:02:27.538000   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.538007   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:27.538012   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:27.538060   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:27.568622   72122 cri.go:89] found id: ""
	I0910 19:02:27.568649   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.568660   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:27.568668   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:27.568724   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:27.603035   72122 cri.go:89] found id: ""
	I0910 19:02:27.603058   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.603067   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:27.603072   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:27.603131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:27.637624   72122 cri.go:89] found id: ""
	I0910 19:02:27.637651   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.637662   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:27.637673   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:27.637693   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:27.651893   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:27.651915   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:27.723949   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:27.723969   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:27.723983   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:27.801463   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:27.801496   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:27.841969   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:27.842000   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:30.398857   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:30.412720   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:30.412790   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:30.448125   72122 cri.go:89] found id: ""
	I0910 19:02:30.448152   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.448163   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:30.448171   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:30.448234   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:30.481988   72122 cri.go:89] found id: ""
	I0910 19:02:30.482016   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.482027   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:30.482035   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:30.482083   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:30.516548   72122 cri.go:89] found id: ""
	I0910 19:02:30.516576   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.516583   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:30.516589   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:30.516646   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:30.566884   72122 cri.go:89] found id: ""
	I0910 19:02:30.566910   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.566918   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:30.566923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:30.566975   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:30.602278   72122 cri.go:89] found id: ""
	I0910 19:02:30.602306   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.602314   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:30.602319   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:30.602379   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:30.636708   72122 cri.go:89] found id: ""
	I0910 19:02:30.636732   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.636740   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:30.636745   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:30.636797   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:30.681255   72122 cri.go:89] found id: ""
	I0910 19:02:30.681280   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.681295   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:30.681303   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:30.681361   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:30.715516   72122 cri.go:89] found id: ""
	I0910 19:02:30.715543   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.715551   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:30.715560   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:30.715572   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:30.768916   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:30.768948   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:30.783318   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:30.783348   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:30.852901   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:30.852925   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:30.852940   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:30.932276   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:30.932314   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:33.725716   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:36.223970   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:34.533148   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:36.533854   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:32.667516   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:35.164375   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:33.471931   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:33.486152   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:33.486211   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:33.524130   72122 cri.go:89] found id: ""
	I0910 19:02:33.524161   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.524173   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:33.524180   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:33.524238   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:33.562216   72122 cri.go:89] found id: ""
	I0910 19:02:33.562238   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.562247   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:33.562252   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:33.562305   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:33.596587   72122 cri.go:89] found id: ""
	I0910 19:02:33.596615   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.596626   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:33.596634   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:33.596692   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:33.633307   72122 cri.go:89] found id: ""
	I0910 19:02:33.633330   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.633338   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:33.633343   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:33.633403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:33.667780   72122 cri.go:89] found id: ""
	I0910 19:02:33.667805   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.667815   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:33.667820   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:33.667878   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:33.702406   72122 cri.go:89] found id: ""
	I0910 19:02:33.702436   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.702447   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:33.702456   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:33.702524   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:33.744544   72122 cri.go:89] found id: ""
	I0910 19:02:33.744574   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.744581   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:33.744587   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:33.744661   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:33.782000   72122 cri.go:89] found id: ""
	I0910 19:02:33.782024   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.782032   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:33.782040   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:33.782053   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:33.858087   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:33.858115   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:33.858133   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:33.943238   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:33.943278   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:33.987776   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:33.987804   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:34.043197   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:34.043232   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:36.558122   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:36.571125   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:36.571195   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:36.606195   72122 cri.go:89] found id: ""
	I0910 19:02:36.606228   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.606239   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:36.606246   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:36.606304   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:36.640248   72122 cri.go:89] found id: ""
	I0910 19:02:36.640290   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.640302   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:36.640310   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:36.640360   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:36.676916   72122 cri.go:89] found id: ""
	I0910 19:02:36.676942   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.676952   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:36.676958   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:36.677013   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:36.713183   72122 cri.go:89] found id: ""
	I0910 19:02:36.713207   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.713218   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:36.713225   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:36.713283   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:36.750748   72122 cri.go:89] found id: ""
	I0910 19:02:36.750775   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.750782   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:36.750787   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:36.750847   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:36.782614   72122 cri.go:89] found id: ""
	I0910 19:02:36.782636   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.782644   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:36.782650   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:36.782709   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:36.822051   72122 cri.go:89] found id: ""
	I0910 19:02:36.822083   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.822094   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:36.822102   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:36.822161   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:36.856068   72122 cri.go:89] found id: ""
	I0910 19:02:36.856096   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.856106   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:36.856117   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:36.856132   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:36.909586   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:36.909625   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:36.931649   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:36.931676   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:37.040146   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:37.040175   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:37.040194   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:37.121902   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:37.121942   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:38.723762   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:40.723880   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:38.534001   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:41.035356   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:37.665212   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:39.668115   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:42.164118   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:39.665474   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:39.678573   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:39.678633   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:39.712755   72122 cri.go:89] found id: ""
	I0910 19:02:39.712783   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.712793   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:39.712800   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:39.712857   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:39.744709   72122 cri.go:89] found id: ""
	I0910 19:02:39.744738   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.744748   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:39.744756   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:39.744809   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:39.780161   72122 cri.go:89] found id: ""
	I0910 19:02:39.780189   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.780200   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:39.780207   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:39.780255   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:39.817665   72122 cri.go:89] found id: ""
	I0910 19:02:39.817695   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.817704   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:39.817709   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:39.817757   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:39.857255   72122 cri.go:89] found id: ""
	I0910 19:02:39.857291   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.857299   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:39.857306   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:39.857381   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:39.893514   72122 cri.go:89] found id: ""
	I0910 19:02:39.893540   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.893550   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:39.893558   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:39.893614   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:39.932720   72122 cri.go:89] found id: ""
	I0910 19:02:39.932753   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.932767   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:39.932775   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:39.932835   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:39.977063   72122 cri.go:89] found id: ""
	I0910 19:02:39.977121   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.977135   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:39.977146   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:39.977168   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:39.991414   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:39.991445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:40.066892   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:40.066910   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:40.066922   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:40.150648   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:40.150680   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:40.198519   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:40.198561   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:42.724332   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:45.223804   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:43.533841   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:45.534665   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:44.164851   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:46.165259   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:42.749906   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:42.769633   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:42.769703   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:42.812576   72122 cri.go:89] found id: ""
	I0910 19:02:42.812603   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.812613   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:42.812620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:42.812682   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:42.846233   72122 cri.go:89] found id: ""
	I0910 19:02:42.846257   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.846266   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:42.846271   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:42.846326   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:42.883564   72122 cri.go:89] found id: ""
	I0910 19:02:42.883593   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.883605   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:42.883612   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:42.883669   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:42.920774   72122 cri.go:89] found id: ""
	I0910 19:02:42.920801   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.920813   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:42.920820   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:42.920883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:42.953776   72122 cri.go:89] found id: ""
	I0910 19:02:42.953808   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.953820   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:42.953829   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:42.953887   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:42.989770   72122 cri.go:89] found id: ""
	I0910 19:02:42.989806   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.989820   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:42.989829   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:42.989893   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:43.022542   72122 cri.go:89] found id: ""
	I0910 19:02:43.022567   72122 logs.go:276] 0 containers: []
	W0910 19:02:43.022574   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:43.022580   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:43.022629   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:43.064308   72122 cri.go:89] found id: ""
	I0910 19:02:43.064329   72122 logs.go:276] 0 containers: []
	W0910 19:02:43.064337   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:43.064344   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:43.064356   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:43.120212   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:43.120243   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:43.134269   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:43.134296   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:43.218840   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:43.218865   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:43.218880   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:43.302560   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:43.302591   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:45.842788   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:45.857495   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:45.857557   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:45.892745   72122 cri.go:89] found id: ""
	I0910 19:02:45.892772   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.892782   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:45.892790   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:45.892888   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:45.928451   72122 cri.go:89] found id: ""
	I0910 19:02:45.928476   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.928486   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:45.928493   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:45.928551   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:45.962868   72122 cri.go:89] found id: ""
	I0910 19:02:45.962899   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.962910   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:45.962918   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:45.962979   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:45.996975   72122 cri.go:89] found id: ""
	I0910 19:02:45.997000   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.997009   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:45.997014   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:45.997065   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:46.032271   72122 cri.go:89] found id: ""
	I0910 19:02:46.032299   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.032309   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:46.032317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:46.032375   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:46.072629   72122 cri.go:89] found id: ""
	I0910 19:02:46.072654   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.072662   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:46.072667   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:46.072713   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:46.112196   72122 cri.go:89] found id: ""
	I0910 19:02:46.112220   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.112228   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:46.112233   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:46.112298   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:46.155700   72122 cri.go:89] found id: ""
	I0910 19:02:46.155732   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.155745   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:46.155759   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:46.155794   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:46.210596   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:46.210624   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:46.224951   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:46.224980   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:46.294571   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:46.294597   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:46.294613   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:46.382431   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:46.382495   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:47.224808   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:49.225392   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:51.227601   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:48.033643   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:50.535490   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:48.665543   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:50.666596   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:48.926582   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:48.941256   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:48.941338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:48.979810   72122 cri.go:89] found id: ""
	I0910 19:02:48.979842   72122 logs.go:276] 0 containers: []
	W0910 19:02:48.979849   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:48.979856   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:48.979917   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:49.015083   72122 cri.go:89] found id: ""
	I0910 19:02:49.015126   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.015136   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:49.015144   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:49.015205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:49.052417   72122 cri.go:89] found id: ""
	I0910 19:02:49.052445   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.052453   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:49.052459   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:49.052511   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:49.092485   72122 cri.go:89] found id: ""
	I0910 19:02:49.092523   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.092533   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:49.092538   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:49.092588   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:49.127850   72122 cri.go:89] found id: ""
	I0910 19:02:49.127882   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.127889   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:49.127897   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:49.127952   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:49.160693   72122 cri.go:89] found id: ""
	I0910 19:02:49.160724   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.160733   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:49.160740   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:49.160798   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:49.194713   72122 cri.go:89] found id: ""
	I0910 19:02:49.194737   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.194744   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:49.194750   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:49.194804   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:49.229260   72122 cri.go:89] found id: ""
	I0910 19:02:49.229283   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.229292   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:49.229303   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:49.229320   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:49.281963   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:49.281992   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:49.294789   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:49.294809   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:49.366126   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:49.366152   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:49.366172   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:49.451187   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:49.451225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:51.990361   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:52.003744   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:52.003807   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:52.036794   72122 cri.go:89] found id: ""
	I0910 19:02:52.036824   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.036834   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:52.036840   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:52.036896   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:52.074590   72122 cri.go:89] found id: ""
	I0910 19:02:52.074613   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.074620   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:52.074625   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:52.074675   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:52.119926   72122 cri.go:89] found id: ""
	I0910 19:02:52.119967   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.119981   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:52.119990   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:52.120075   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:52.157862   72122 cri.go:89] found id: ""
	I0910 19:02:52.157889   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.157900   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:52.157906   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:52.157968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:52.198645   72122 cri.go:89] found id: ""
	I0910 19:02:52.198675   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.198686   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:52.198693   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:52.198756   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:52.240091   72122 cri.go:89] found id: ""
	I0910 19:02:52.240113   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.240129   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:52.240139   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:52.240197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:52.275046   72122 cri.go:89] found id: ""
	I0910 19:02:52.275079   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.275090   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:52.275098   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:52.275179   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:52.311141   72122 cri.go:89] found id: ""
	I0910 19:02:52.311172   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.311184   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:52.311196   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:52.311211   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:52.400004   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:52.400039   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:52.449043   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:52.449090   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:53.724151   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:56.223353   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:53.033328   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:55.035259   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:53.164639   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:55.165714   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:52.502304   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:52.502336   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:52.518747   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:52.518772   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:52.593581   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:55.094092   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:55.108752   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:55.108830   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:55.143094   72122 cri.go:89] found id: ""
	I0910 19:02:55.143122   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.143133   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:55.143141   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:55.143198   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:55.184298   72122 cri.go:89] found id: ""
	I0910 19:02:55.184326   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.184334   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:55.184340   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:55.184397   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:55.216557   72122 cri.go:89] found id: ""
	I0910 19:02:55.216585   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.216596   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:55.216613   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:55.216676   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:55.251049   72122 cri.go:89] found id: ""
	I0910 19:02:55.251075   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.251083   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:55.251090   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:55.251152   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:55.282689   72122 cri.go:89] found id: ""
	I0910 19:02:55.282716   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.282724   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:55.282729   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:55.282800   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:55.316959   72122 cri.go:89] found id: ""
	I0910 19:02:55.316993   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.317004   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:55.317011   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:55.317085   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:55.353110   72122 cri.go:89] found id: ""
	I0910 19:02:55.353134   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.353143   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:55.353149   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:55.353205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:55.392391   72122 cri.go:89] found id: ""
	I0910 19:02:55.392422   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.392434   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:55.392446   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:55.392461   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:55.445431   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:55.445469   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:55.459348   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:55.459374   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:55.528934   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:55.528957   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:55.528973   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:55.610797   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:55.610833   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:58.223882   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:00.223951   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:57.533754   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:59.535018   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:01.535255   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:57.667276   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:00.164510   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:58.152775   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:58.166383   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:58.166440   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:58.203198   72122 cri.go:89] found id: ""
	I0910 19:02:58.203225   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.203233   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:58.203239   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:58.203284   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:58.240538   72122 cri.go:89] found id: ""
	I0910 19:02:58.240560   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.240567   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:58.240573   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:58.240633   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:58.274802   72122 cri.go:89] found id: ""
	I0910 19:02:58.274826   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.274833   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:58.274839   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:58.274886   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:58.311823   72122 cri.go:89] found id: ""
	I0910 19:02:58.311857   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.311868   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:58.311876   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:58.311933   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:58.347260   72122 cri.go:89] found id: ""
	I0910 19:02:58.347281   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.347288   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:58.347294   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:58.347338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:58.382621   72122 cri.go:89] found id: ""
	I0910 19:02:58.382645   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.382655   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:58.382662   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:58.382720   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:58.418572   72122 cri.go:89] found id: ""
	I0910 19:02:58.418597   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.418605   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:58.418611   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:58.418663   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:58.459955   72122 cri.go:89] found id: ""
	I0910 19:02:58.459987   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.459995   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:58.460003   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:58.460016   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:58.512831   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:58.512866   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:58.527036   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:58.527067   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:58.593329   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:58.593350   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:58.593366   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:58.671171   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:58.671201   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
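
The cri.go/logs.go lines above repeat a simple probe: run `sudo crictl ps -a --quiet --name=<component>` for each control-plane component and treat empty output as "No container was found matching ...". The following is a minimal sketch of that probe pattern, not minikube's actual cri.go implementation; it assumes crictl is installed on the host it runs on.

// probe.go - illustrative sketch of the crictl-by-name probe seen in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the container IDs crictl reports for a name filter;
// an empty slice corresponds to the "0 containers" lines in the log.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := listContainerIDs(component)
		if err != nil {
			fmt.Printf("probe failed for %q: %v\n", component, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", component)
		} else {
			fmt.Printf("%q: %d container(s)\n", component, len(ids))
		}
	}
}
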
	I0910 19:03:01.211905   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:01.226567   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:01.226724   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:01.261860   72122 cri.go:89] found id: ""
	I0910 19:03:01.261885   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.261893   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:01.261898   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:01.261946   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:01.294754   72122 cri.go:89] found id: ""
	I0910 19:03:01.294774   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.294781   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:01.294786   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:01.294833   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:01.328378   72122 cri.go:89] found id: ""
	I0910 19:03:01.328403   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.328412   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:01.328417   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:01.328465   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:01.363344   72122 cri.go:89] found id: ""
	I0910 19:03:01.363370   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.363380   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:01.363388   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:01.363446   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:01.398539   72122 cri.go:89] found id: ""
	I0910 19:03:01.398576   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.398586   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:01.398593   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:01.398654   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:01.431367   72122 cri.go:89] found id: ""
	I0910 19:03:01.431390   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.431397   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:01.431403   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:01.431458   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:01.464562   72122 cri.go:89] found id: ""
	I0910 19:03:01.464589   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.464599   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:01.464606   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:01.464666   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:01.497493   72122 cri.go:89] found id: ""
	I0910 19:03:01.497520   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.497531   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:01.497540   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:01.497555   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:01.583083   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:01.583140   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:01.624887   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:01.624919   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:01.676124   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:01.676155   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:01.690861   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:01.690894   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:01.763695   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
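
Every "describe nodes" attempt above fails with "connection refused" on localhost:8443 because no kube-apiserver container ever comes up, so kubectl has nothing to connect to. A minimal reachability check makes that failure mode concrete; this is only an illustrative probe (the /healthz path and the skipped TLS verification are assumptions for the sketch, not part of minikube or the test):

// reach.go - illustrative check of whether the apiserver endpoint from the log is listening.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves TLS signed by a cluster-internal CA; verification is
		// skipped here purely for this connectivity sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// With no apiserver container running this prints a "connection refused"
		// error, matching the kubectl failure captured in the log.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver /healthz:", resp.Status)
}
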
	I0910 19:03:02.724017   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.725049   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.033371   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:06.033600   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:02.666137   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.669740   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:07.164822   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
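
The pod_ready.go lines interleaved above come from other test clusters polling their metrics-server pods, which report Ready "False" throughout. A rough equivalent of such a readiness poll, written against client-go, is sketched below; it is not minikube's own pod_ready helper, and the kubeconfig path and the k8s-app=metrics-server label selector are assumptions made for the illustration.

// podready.go - illustrative readiness poll for metrics-server pods.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is the
// status the log lines above keep showing as "False".
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; minikube tests use their own per-profile paths.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 5; i++ {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
		if err != nil {
			fmt.Println("list failed:", err)
		} else {
			for _, p := range pods.Items {
				fmt.Printf("pod %q Ready=%v\n", p.Name, isPodReady(&p))
			}
		}
		time.Sleep(2 * time.Second) // roughly the cadence visible in the log timestamps
	}
}
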
	I0910 19:03:04.264867   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:04.279106   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:04.279176   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:04.315358   72122 cri.go:89] found id: ""
	I0910 19:03:04.315390   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.315398   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:04.315403   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:04.315457   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:04.359466   72122 cri.go:89] found id: ""
	I0910 19:03:04.359489   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.359496   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:04.359504   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:04.359563   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:04.399504   72122 cri.go:89] found id: ""
	I0910 19:03:04.399529   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.399538   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:04.399545   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:04.399604   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:04.438244   72122 cri.go:89] found id: ""
	I0910 19:03:04.438269   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.438277   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:04.438282   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:04.438340   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:04.475299   72122 cri.go:89] found id: ""
	I0910 19:03:04.475321   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.475329   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:04.475334   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:04.475386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:04.516500   72122 cri.go:89] found id: ""
	I0910 19:03:04.516520   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.516529   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:04.516534   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:04.516588   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:04.551191   72122 cri.go:89] found id: ""
	I0910 19:03:04.551214   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.551222   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:04.551228   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:04.551273   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:04.585646   72122 cri.go:89] found id: ""
	I0910 19:03:04.585667   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.585675   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:04.585684   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:04.585699   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:04.598832   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:04.598858   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:04.670117   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:04.670140   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:04.670156   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:04.746592   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:04.746626   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:04.784061   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:04.784088   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:07.337082   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:07.350696   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:07.350752   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:07.387344   72122 cri.go:89] found id: ""
	I0910 19:03:07.387373   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.387384   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:07.387391   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:07.387449   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:07.420468   72122 cri.go:89] found id: ""
	I0910 19:03:07.420490   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.420498   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:07.420503   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:07.420566   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:07.453746   72122 cri.go:89] found id: ""
	I0910 19:03:07.453773   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.453784   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:07.453791   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:07.453845   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:07.487359   72122 cri.go:89] found id: ""
	I0910 19:03:07.487388   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.487400   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:07.487407   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:07.487470   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:07.223432   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:09.723164   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:08.033767   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:10.035613   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:09.165972   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:11.663740   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:07.520803   72122 cri.go:89] found id: ""
	I0910 19:03:07.520827   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.520834   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:07.520839   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:07.520898   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:07.556908   72122 cri.go:89] found id: ""
	I0910 19:03:07.556934   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.556945   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:07.556953   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:07.557017   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:07.596072   72122 cri.go:89] found id: ""
	I0910 19:03:07.596093   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.596102   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:07.596107   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:07.596165   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:07.631591   72122 cri.go:89] found id: ""
	I0910 19:03:07.631620   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.631630   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:07.631639   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:07.631661   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:07.683892   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:07.683923   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:07.697619   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:07.697645   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:07.766370   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:07.766397   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:07.766413   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:07.854102   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:07.854140   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
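
Each diagnostic cycle above opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`; when pgrep matches no process it exits non-zero, and the container probes and log gathering repeat. The sketch below shows that check in isolation; it is an illustration only, not minikube's ssh_runner code.

// pgrepcheck.go - illustrative version of the apiserver process check that starts each cycle.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		// pgrep exits 1 when nothing matches; in the failing run above that means
		// the apiserver never started, so the retry loop keeps going.
		fmt.Println("kube-apiserver process not found:", err)
		return
	}
	fmt.Printf("kube-apiserver pid(s): %s", out)
}
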
	I0910 19:03:10.400185   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:10.412771   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:10.412842   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:10.447710   72122 cri.go:89] found id: ""
	I0910 19:03:10.447739   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.447750   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:10.447757   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:10.447822   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:10.480865   72122 cri.go:89] found id: ""
	I0910 19:03:10.480892   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.480902   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:10.480909   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:10.480966   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:10.514893   72122 cri.go:89] found id: ""
	I0910 19:03:10.514919   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.514927   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:10.514933   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:10.514994   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:10.556332   72122 cri.go:89] found id: ""
	I0910 19:03:10.556374   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.556385   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:10.556392   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:10.556457   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:10.590529   72122 cri.go:89] found id: ""
	I0910 19:03:10.590562   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.590573   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:10.590581   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:10.590642   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:10.623697   72122 cri.go:89] found id: ""
	I0910 19:03:10.623724   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.623732   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:10.623737   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:10.623788   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:10.659236   72122 cri.go:89] found id: ""
	I0910 19:03:10.659259   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.659270   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:10.659277   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:10.659338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:10.693150   72122 cri.go:89] found id: ""
	I0910 19:03:10.693182   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.693192   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:10.693202   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:10.693217   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:10.744624   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:10.744663   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:10.758797   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:10.758822   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:10.853796   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:10.853815   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:10.853827   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:10.937972   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:10.938019   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:11.724808   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:14.224052   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:12.535134   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:15.033867   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:17.034507   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:13.667548   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:16.164483   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:13.481898   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:13.495440   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:13.495505   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:13.531423   72122 cri.go:89] found id: ""
	I0910 19:03:13.531452   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.531463   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:13.531470   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:13.531532   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:13.571584   72122 cri.go:89] found id: ""
	I0910 19:03:13.571607   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.571615   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:13.571620   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:13.571674   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:13.609670   72122 cri.go:89] found id: ""
	I0910 19:03:13.609695   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.609702   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:13.609707   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:13.609761   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:13.644726   72122 cri.go:89] found id: ""
	I0910 19:03:13.644755   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.644766   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:13.644773   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:13.644831   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:13.679692   72122 cri.go:89] found id: ""
	I0910 19:03:13.679722   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.679733   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:13.679741   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:13.679791   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:13.717148   72122 cri.go:89] found id: ""
	I0910 19:03:13.717177   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.717186   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:13.717192   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:13.717247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:13.755650   72122 cri.go:89] found id: ""
	I0910 19:03:13.755676   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.755688   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:13.755693   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:13.755740   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:13.788129   72122 cri.go:89] found id: ""
	I0910 19:03:13.788158   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.788169   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:13.788179   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:13.788194   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:13.865241   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:13.865277   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:13.909205   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:13.909233   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:13.963495   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:13.963523   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:13.977311   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:13.977337   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:14.047015   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:16.547505   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:16.568333   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:16.568412   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:16.610705   72122 cri.go:89] found id: ""
	I0910 19:03:16.610734   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.610744   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:16.610752   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:16.610808   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:16.647307   72122 cri.go:89] found id: ""
	I0910 19:03:16.647333   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.647340   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:16.647345   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:16.647409   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:16.684513   72122 cri.go:89] found id: ""
	I0910 19:03:16.684536   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.684544   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:16.684549   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:16.684602   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:16.718691   72122 cri.go:89] found id: ""
	I0910 19:03:16.718719   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.718729   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:16.718734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:16.718794   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:16.753250   72122 cri.go:89] found id: ""
	I0910 19:03:16.753279   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.753291   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:16.753298   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:16.753358   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:16.788953   72122 cri.go:89] found id: ""
	I0910 19:03:16.788984   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.789001   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:16.789009   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:16.789084   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:16.823715   72122 cri.go:89] found id: ""
	I0910 19:03:16.823746   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.823760   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:16.823767   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:16.823837   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:16.858734   72122 cri.go:89] found id: ""
	I0910 19:03:16.858758   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.858770   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:16.858780   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:16.858795   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:16.897983   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:16.898012   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:16.950981   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:16.951015   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:16.964809   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:16.964839   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:17.039142   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:17.039163   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:17.039177   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:16.724218   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:19.223909   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:19.533783   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:21.534203   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:18.164708   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:20.664302   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:19.619941   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:19.634432   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:19.634489   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:19.671220   72122 cri.go:89] found id: ""
	I0910 19:03:19.671246   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.671256   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:19.671264   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:19.671322   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:19.704251   72122 cri.go:89] found id: ""
	I0910 19:03:19.704278   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.704294   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:19.704301   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:19.704347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:19.745366   72122 cri.go:89] found id: ""
	I0910 19:03:19.745393   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.745403   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:19.745410   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:19.745466   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:19.781100   72122 cri.go:89] found id: ""
	I0910 19:03:19.781129   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.781136   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:19.781141   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:19.781195   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:19.817177   72122 cri.go:89] found id: ""
	I0910 19:03:19.817207   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.817219   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:19.817226   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:19.817292   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:19.852798   72122 cri.go:89] found id: ""
	I0910 19:03:19.852829   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.852837   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:19.852842   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:19.852889   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:19.887173   72122 cri.go:89] found id: ""
	I0910 19:03:19.887200   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.887210   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:19.887219   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:19.887409   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:19.922997   72122 cri.go:89] found id: ""
	I0910 19:03:19.923026   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.923038   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:19.923049   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:19.923063   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:19.975703   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:19.975736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:19.989834   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:19.989866   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:20.061312   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:20.061332   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:20.061344   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:20.143045   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:20.143080   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:21.723250   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:23.723771   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:25.724346   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:24.036790   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:26.533830   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:22.664756   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:25.164650   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:22.681900   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:22.694860   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:22.694923   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:22.738529   72122 cri.go:89] found id: ""
	I0910 19:03:22.738553   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.738563   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:22.738570   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:22.738640   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:22.778102   72122 cri.go:89] found id: ""
	I0910 19:03:22.778132   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.778143   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:22.778150   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:22.778207   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:22.813273   72122 cri.go:89] found id: ""
	I0910 19:03:22.813307   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.813320   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:22.813334   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:22.813397   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:22.849613   72122 cri.go:89] found id: ""
	I0910 19:03:22.849637   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.849646   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:22.849651   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:22.849701   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:22.883138   72122 cri.go:89] found id: ""
	I0910 19:03:22.883167   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.883178   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:22.883185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:22.883237   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:22.918521   72122 cri.go:89] found id: ""
	I0910 19:03:22.918550   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.918567   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:22.918574   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:22.918632   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:22.966657   72122 cri.go:89] found id: ""
	I0910 19:03:22.966684   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.966691   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:22.966701   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:22.966762   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:23.022254   72122 cri.go:89] found id: ""
	I0910 19:03:23.022282   72122 logs.go:276] 0 containers: []
	W0910 19:03:23.022290   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:23.022298   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:23.022309   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:23.082347   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:23.082386   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:23.096792   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:23.096814   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:23.172720   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:23.172740   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:23.172754   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:23.256155   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:23.256193   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:25.797211   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:25.810175   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:25.810234   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:25.844848   72122 cri.go:89] found id: ""
	I0910 19:03:25.844876   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.844886   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:25.844901   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:25.844968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:25.877705   72122 cri.go:89] found id: ""
	I0910 19:03:25.877736   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.877747   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:25.877755   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:25.877807   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:25.913210   72122 cri.go:89] found id: ""
	I0910 19:03:25.913238   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.913248   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:25.913256   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:25.913316   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:25.947949   72122 cri.go:89] found id: ""
	I0910 19:03:25.947974   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.947984   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:25.947991   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:25.948050   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:25.983487   72122 cri.go:89] found id: ""
	I0910 19:03:25.983511   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.983519   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:25.983524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:25.983573   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:26.018176   72122 cri.go:89] found id: ""
	I0910 19:03:26.018201   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.018209   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:26.018214   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:26.018271   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:26.052063   72122 cri.go:89] found id: ""
	I0910 19:03:26.052087   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.052097   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:26.052104   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:26.052165   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:26.091919   72122 cri.go:89] found id: ""
	I0910 19:03:26.091949   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.091958   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:26.091968   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:26.091983   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:26.146059   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:26.146094   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:26.160529   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:26.160562   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:26.230742   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:26.230764   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:26.230778   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:26.313191   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:26.313222   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:27.724922   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:30.223811   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:29.039957   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:31.533256   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:27.665626   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:29.666857   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:32.165153   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:28.858457   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:28.873725   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:28.873788   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:28.922685   72122 cri.go:89] found id: ""
	I0910 19:03:28.922717   72122 logs.go:276] 0 containers: []
	W0910 19:03:28.922729   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:28.922737   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:28.922795   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:28.973236   72122 cri.go:89] found id: ""
	I0910 19:03:28.973260   72122 logs.go:276] 0 containers: []
	W0910 19:03:28.973270   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:28.973277   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:28.973339   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:29.008999   72122 cri.go:89] found id: ""
	I0910 19:03:29.009049   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.009062   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:29.009081   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:29.009148   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:29.049009   72122 cri.go:89] found id: ""
	I0910 19:03:29.049037   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.049047   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:29.049056   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:29.049131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:29.089543   72122 cri.go:89] found id: ""
	I0910 19:03:29.089564   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.089573   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:29.089578   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:29.089648   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:29.126887   72122 cri.go:89] found id: ""
	I0910 19:03:29.126911   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.126918   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:29.126924   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:29.126969   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:29.161369   72122 cri.go:89] found id: ""
	I0910 19:03:29.161395   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.161405   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:29.161412   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:29.161474   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:29.199627   72122 cri.go:89] found id: ""
	I0910 19:03:29.199652   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.199661   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:29.199672   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:29.199691   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:29.268353   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:29.268386   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:29.268401   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:29.351470   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:29.351504   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:29.391768   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:29.391796   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:29.442705   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:29.442736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:31.957567   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:31.970218   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:31.970274   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:32.004870   72122 cri.go:89] found id: ""
	I0910 19:03:32.004898   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.004908   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:32.004915   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:32.004971   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:32.045291   72122 cri.go:89] found id: ""
	I0910 19:03:32.045322   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.045331   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:32.045337   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:32.045403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:32.085969   72122 cri.go:89] found id: ""
	I0910 19:03:32.085999   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.086007   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:32.086013   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:32.086067   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:32.120100   72122 cri.go:89] found id: ""
	I0910 19:03:32.120127   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.120135   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:32.120141   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:32.120187   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:32.153977   72122 cri.go:89] found id: ""
	I0910 19:03:32.154004   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.154011   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:32.154016   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:32.154065   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:32.195980   72122 cri.go:89] found id: ""
	I0910 19:03:32.196005   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.196013   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:32.196019   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:32.196068   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:32.233594   72122 cri.go:89] found id: ""
	I0910 19:03:32.233616   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.233623   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:32.233632   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:32.233677   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:32.268118   72122 cri.go:89] found id: ""
	I0910 19:03:32.268144   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.268152   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:32.268160   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:32.268171   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:32.281389   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:32.281416   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:32.359267   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:32.359289   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:32.359304   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:32.445096   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:32.445137   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:32.483288   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:32.483325   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:32.224155   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:34.724191   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:33.537955   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:36.033801   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:34.663475   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:36.665627   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:35.040393   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:35.053698   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:35.053750   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:35.087712   72122 cri.go:89] found id: ""
	I0910 19:03:35.087742   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.087751   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:35.087757   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:35.087802   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:35.125437   72122 cri.go:89] found id: ""
	I0910 19:03:35.125468   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.125482   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:35.125495   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:35.125562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:35.163885   72122 cri.go:89] found id: ""
	I0910 19:03:35.163914   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.163924   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:35.163931   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:35.163989   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:35.199426   72122 cri.go:89] found id: ""
	I0910 19:03:35.199459   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.199471   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:35.199479   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:35.199559   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:35.236388   72122 cri.go:89] found id: ""
	I0910 19:03:35.236408   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.236416   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:35.236421   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:35.236465   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:35.274797   72122 cri.go:89] found id: ""
	I0910 19:03:35.274817   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.274825   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:35.274830   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:35.274874   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:35.308127   72122 cri.go:89] found id: ""
	I0910 19:03:35.308155   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.308166   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:35.308173   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:35.308230   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:35.340675   72122 cri.go:89] found id: ""
	I0910 19:03:35.340697   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.340704   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:35.340712   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:35.340727   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:35.390806   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:35.390842   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:35.404427   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:35.404458   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:35.471526   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:35.471560   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:35.471575   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:35.547469   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:35.547497   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:37.223464   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:39.224137   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:41.224189   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:38.534280   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:40.534728   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:38.666077   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:41.165483   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:38.087127   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:38.100195   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:38.100251   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:38.135386   72122 cri.go:89] found id: ""
	I0910 19:03:38.135408   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.135416   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:38.135422   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:38.135480   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:38.168531   72122 cri.go:89] found id: ""
	I0910 19:03:38.168558   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.168568   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:38.168577   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:38.168639   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:38.202931   72122 cri.go:89] found id: ""
	I0910 19:03:38.202958   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.202968   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:38.202974   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:38.203030   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:38.239185   72122 cri.go:89] found id: ""
	I0910 19:03:38.239209   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.239219   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:38.239226   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:38.239279   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:38.276927   72122 cri.go:89] found id: ""
	I0910 19:03:38.276952   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.276961   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:38.276967   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:38.277035   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:38.311923   72122 cri.go:89] found id: ""
	I0910 19:03:38.311951   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.311962   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:38.311971   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:38.312034   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:38.344981   72122 cri.go:89] found id: ""
	I0910 19:03:38.345012   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.345023   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:38.345030   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:38.345099   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:38.378012   72122 cri.go:89] found id: ""
	I0910 19:03:38.378037   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.378048   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:38.378058   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:38.378076   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:38.449361   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:38.449384   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:38.449396   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:38.530683   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:38.530713   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:38.570047   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:38.570073   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:38.620143   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:38.620176   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:41.134152   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:41.148416   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:41.148509   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:41.186681   72122 cri.go:89] found id: ""
	I0910 19:03:41.186706   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.186713   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:41.186719   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:41.186767   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:41.221733   72122 cri.go:89] found id: ""
	I0910 19:03:41.221758   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.221769   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:41.221776   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:41.221834   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:41.256099   72122 cri.go:89] found id: ""
	I0910 19:03:41.256125   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.256136   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:41.256143   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:41.256194   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:41.289825   72122 cri.go:89] found id: ""
	I0910 19:03:41.289850   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.289860   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:41.289867   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:41.289926   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:41.323551   72122 cri.go:89] found id: ""
	I0910 19:03:41.323581   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.323594   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:41.323601   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:41.323659   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:41.356508   72122 cri.go:89] found id: ""
	I0910 19:03:41.356535   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.356546   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:41.356553   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:41.356608   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:41.391556   72122 cri.go:89] found id: ""
	I0910 19:03:41.391579   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.391586   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:41.391592   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:41.391651   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:41.427685   72122 cri.go:89] found id: ""
	I0910 19:03:41.427711   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.427726   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:41.427743   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:41.427758   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:41.481970   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:41.482001   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:41.495266   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:41.495290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:41.568334   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:41.568357   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:41.568370   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:41.650178   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:41.650211   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:43.724494   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:46.223803   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:43.034100   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:45.035091   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:43.167877   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:45.664633   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:44.193665   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:44.209118   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:44.209197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:44.245792   72122 cri.go:89] found id: ""
	I0910 19:03:44.245819   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.245829   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:44.245834   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:44.245900   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:44.285673   72122 cri.go:89] found id: ""
	I0910 19:03:44.285699   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.285711   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:44.285719   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:44.285787   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:44.326471   72122 cri.go:89] found id: ""
	I0910 19:03:44.326495   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.326505   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:44.326520   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:44.326589   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:44.367864   72122 cri.go:89] found id: ""
	I0910 19:03:44.367890   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.367898   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:44.367907   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:44.367954   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:44.407161   72122 cri.go:89] found id: ""
	I0910 19:03:44.407185   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.407193   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:44.407198   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:44.407256   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:44.446603   72122 cri.go:89] found id: ""
	I0910 19:03:44.446628   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.446638   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:44.446645   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:44.446705   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:44.486502   72122 cri.go:89] found id: ""
	I0910 19:03:44.486526   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.486536   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:44.486543   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:44.486605   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:44.524992   72122 cri.go:89] found id: ""
	I0910 19:03:44.525017   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.525025   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:44.525033   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:44.525044   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:44.579387   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:44.579418   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:44.594045   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:44.594070   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:44.678857   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:44.678883   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:44.678897   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:44.763799   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:44.763830   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:47.305631   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:47.319275   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:47.319347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:47.359199   72122 cri.go:89] found id: ""
	I0910 19:03:47.359222   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.359233   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:47.359240   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:47.359300   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:47.397579   72122 cri.go:89] found id: ""
	I0910 19:03:47.397602   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.397610   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:47.397616   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:47.397674   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:47.431114   72122 cri.go:89] found id: ""
	I0910 19:03:47.431138   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.431146   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:47.431151   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:47.431205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:47.470475   72122 cri.go:89] found id: ""
	I0910 19:03:47.470499   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.470509   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:47.470515   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:47.470573   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:48.227625   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:50.725421   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:47.534967   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:49.027864   71529 pod_ready.go:82] duration metric: took 4m0.000448579s for pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace to be "Ready" ...
	E0910 19:03:49.027890   71529 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0910 19:03:49.027905   71529 pod_ready.go:39] duration metric: took 4m14.536052937s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:03:49.027929   71529 kubeadm.go:597] duration metric: took 4m22.283340761s to restartPrimaryControlPlane
	W0910 19:03:49.027982   71529 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0910 19:03:49.028009   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:03:47.668029   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:50.164077   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:47.504484   72122 cri.go:89] found id: ""
	I0910 19:03:47.504509   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.504518   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:47.504524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:47.504577   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:47.541633   72122 cri.go:89] found id: ""
	I0910 19:03:47.541651   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.541658   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:47.541663   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:47.541706   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:47.579025   72122 cri.go:89] found id: ""
	I0910 19:03:47.579051   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.579060   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:47.579068   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:47.579123   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:47.612333   72122 cri.go:89] found id: ""
	I0910 19:03:47.612359   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.612370   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:47.612380   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:47.612395   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:47.667214   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:47.667242   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:47.683425   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:47.683466   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:47.749510   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:47.749531   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:47.749543   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:47.830454   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:47.830487   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:50.373207   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:50.387191   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:50.387247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:50.422445   72122 cri.go:89] found id: ""
	I0910 19:03:50.422476   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.422488   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:50.422495   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:50.422562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:50.456123   72122 cri.go:89] found id: ""
	I0910 19:03:50.456145   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.456153   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:50.456157   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:50.456211   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:50.488632   72122 cri.go:89] found id: ""
	I0910 19:03:50.488661   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.488672   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:50.488680   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:50.488736   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:50.523603   72122 cri.go:89] found id: ""
	I0910 19:03:50.523628   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.523636   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:50.523641   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:50.523699   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:50.559741   72122 cri.go:89] found id: ""
	I0910 19:03:50.559765   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.559773   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:50.559778   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:50.559842   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:50.595387   72122 cri.go:89] found id: ""
	I0910 19:03:50.595406   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.595414   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:50.595420   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:50.595472   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:50.628720   72122 cri.go:89] found id: ""
	I0910 19:03:50.628747   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.628767   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:50.628774   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:50.628833   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:50.660635   72122 cri.go:89] found id: ""
	I0910 19:03:50.660655   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.660663   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:50.660671   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:50.660683   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:50.716517   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:50.716544   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:50.731411   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:50.731443   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:50.799252   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:50.799275   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:50.799290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:50.874490   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:50.874524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:53.222989   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:55.225335   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:55.225365   71627 pod_ready.go:82] duration metric: took 4m0.007907353s for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	E0910 19:03:55.225523   71627 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0910 19:03:55.225534   71627 pod_ready.go:39] duration metric: took 4m2.40870138s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:03:55.225551   71627 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:03:55.225579   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:55.225629   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:55.270742   71627 cri.go:89] found id: "1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:55.270761   71627 cri.go:89] found id: ""
	I0910 19:03:55.270768   71627 logs.go:276] 1 containers: [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a]
	I0910 19:03:55.270811   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.276233   71627 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:55.276283   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:55.316033   71627 cri.go:89] found id: "f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:55.316051   71627 cri.go:89] found id: ""
	I0910 19:03:55.316058   71627 logs.go:276] 1 containers: [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade]
	I0910 19:03:55.316103   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.320441   71627 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:55.320494   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:55.354406   71627 cri.go:89] found id: "24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:55.354428   71627 cri.go:89] found id: ""
	I0910 19:03:55.354435   71627 logs.go:276] 1 containers: [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100]
	I0910 19:03:55.354482   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.358553   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:55.358621   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:55.393871   71627 cri.go:89] found id: "1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:55.393896   71627 cri.go:89] found id: ""
	I0910 19:03:55.393904   71627 logs.go:276] 1 containers: [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68]
	I0910 19:03:55.393959   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.398102   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:55.398154   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:55.432605   71627 cri.go:89] found id: "48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:55.432625   71627 cri.go:89] found id: ""
	I0910 19:03:55.432632   71627 logs.go:276] 1 containers: [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27]
	I0910 19:03:55.432686   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.437631   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:55.437689   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:55.474250   71627 cri.go:89] found id: "55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:55.474277   71627 cri.go:89] found id: ""
	I0910 19:03:55.474287   71627 logs.go:276] 1 containers: [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20]
	I0910 19:03:55.474352   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.479177   71627 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:55.479235   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:55.514918   71627 cri.go:89] found id: ""
	I0910 19:03:55.514942   71627 logs.go:276] 0 containers: []
	W0910 19:03:55.514951   71627 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:55.514956   71627 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:03:55.515014   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:03:55.549310   71627 cri.go:89] found id: "b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:55.549330   71627 cri.go:89] found id: "173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:55.549335   71627 cri.go:89] found id: ""
	I0910 19:03:55.549347   71627 logs.go:276] 2 containers: [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9]
	I0910 19:03:55.549404   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.553420   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.557502   71627 logs.go:123] Gathering logs for kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] ...
	I0910 19:03:55.557531   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:55.592661   71627 logs.go:123] Gathering logs for kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] ...
	I0910 19:03:55.592685   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:55.629876   71627 logs.go:123] Gathering logs for storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] ...
	I0910 19:03:55.629908   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:55.668935   71627 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:55.668963   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:55.685881   71627 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:55.685906   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:03:55.815552   71627 logs.go:123] Gathering logs for coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] ...
	I0910 19:03:55.815578   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:55.854615   71627 logs.go:123] Gathering logs for kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] ...
	I0910 19:03:55.854640   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:55.906027   71627 logs.go:123] Gathering logs for storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] ...
	I0910 19:03:55.906069   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:55.943771   71627 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:55.943808   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:52.666368   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:55.165213   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:53.417835   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:53.430627   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:53.430694   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:53.469953   72122 cri.go:89] found id: ""
	I0910 19:03:53.469981   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.469992   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:53.469999   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:53.470060   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:53.503712   72122 cri.go:89] found id: ""
	I0910 19:03:53.503739   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.503750   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:53.503757   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:53.503814   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:53.539875   72122 cri.go:89] found id: ""
	I0910 19:03:53.539895   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.539902   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:53.539907   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:53.539952   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:53.575040   72122 cri.go:89] found id: ""
	I0910 19:03:53.575067   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.575078   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:53.575085   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:53.575159   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:53.611171   72122 cri.go:89] found id: ""
	I0910 19:03:53.611193   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.611201   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:53.611206   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:53.611253   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:53.644467   72122 cri.go:89] found id: ""
	I0910 19:03:53.644494   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.644505   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:53.644513   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:53.644575   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:53.680886   72122 cri.go:89] found id: ""
	I0910 19:03:53.680913   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.680924   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:53.680931   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:53.680989   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:53.716834   72122 cri.go:89] found id: ""
	I0910 19:03:53.716863   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.716875   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:53.716885   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:53.716900   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:53.755544   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:53.755568   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:53.807382   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:53.807411   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:53.820289   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:53.820311   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:53.891500   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:53.891524   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:53.891540   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:56.472368   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:56.491939   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:56.492020   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:56.535575   72122 cri.go:89] found id: ""
	I0910 19:03:56.535603   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.535614   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:56.535620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:56.535672   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:56.570366   72122 cri.go:89] found id: ""
	I0910 19:03:56.570390   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.570398   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:56.570403   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:56.570452   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:56.609486   72122 cri.go:89] found id: ""
	I0910 19:03:56.609524   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.609535   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:56.609542   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:56.609608   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:56.650268   72122 cri.go:89] found id: ""
	I0910 19:03:56.650295   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.650305   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:56.650312   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:56.650371   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:56.689113   72122 cri.go:89] found id: ""
	I0910 19:03:56.689139   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.689146   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:56.689154   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:56.689214   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:56.721546   72122 cri.go:89] found id: ""
	I0910 19:03:56.721568   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.721576   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:56.721582   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:56.721639   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:56.753149   72122 cri.go:89] found id: ""
	I0910 19:03:56.753171   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.753179   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:56.753185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:56.753233   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:56.786624   72122 cri.go:89] found id: ""
	I0910 19:03:56.786648   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.786658   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:56.786669   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:56.786683   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:56.840243   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:56.840276   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:56.854453   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:56.854475   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:56.928814   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:56.928835   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:56.928849   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:57.012360   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:57.012403   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
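
At this point in the run every crictl query for this profile comes back empty and the describe-nodes step fails because the apiserver on localhost:8443 never comes up, so minikube falls back to journal and container-status dumps. To repeat the same inspection by hand on the node (a minimal sketch assuming SSH access to the VM, e.g. via `minikube ssh -p <profile>`; the commands themselves are the ones logged above):

	# list all CRI containers the way logs.go does (empty output here means no control-plane containers)
	sudo crictl ps -a --quiet --name=kube-apiserver
	# gather the same journals minikube collects when nothing is found
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# this is the step that fails while the apiserver is down
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
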
	I0910 19:03:56.443638   71627 logs.go:123] Gathering logs for container status ...
	I0910 19:03:56.443684   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:56.498856   71627 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:56.498897   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:56.573520   71627 logs.go:123] Gathering logs for kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] ...
	I0910 19:03:56.573548   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:56.621270   71627 logs.go:123] Gathering logs for etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] ...
	I0910 19:03:56.621301   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:59.173747   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:59.190441   71627 api_server.go:72] duration metric: took 4m14.110101643s to wait for apiserver process to appear ...
	I0910 19:03:59.190463   71627 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:03:59.190495   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:59.190539   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:59.224716   71627 cri.go:89] found id: "1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:59.224744   71627 cri.go:89] found id: ""
	I0910 19:03:59.224753   71627 logs.go:276] 1 containers: [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a]
	I0910 19:03:59.224811   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.229345   71627 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:59.229412   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:59.263589   71627 cri.go:89] found id: "f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:59.263622   71627 cri.go:89] found id: ""
	I0910 19:03:59.263630   71627 logs.go:276] 1 containers: [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade]
	I0910 19:03:59.263686   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.269664   71627 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:59.269728   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:59.312201   71627 cri.go:89] found id: "24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:59.312224   71627 cri.go:89] found id: ""
	I0910 19:03:59.312233   71627 logs.go:276] 1 containers: [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100]
	I0910 19:03:59.312288   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.317991   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:59.318067   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:59.360625   71627 cri.go:89] found id: "1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:59.360650   71627 cri.go:89] found id: ""
	I0910 19:03:59.360657   71627 logs.go:276] 1 containers: [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68]
	I0910 19:03:59.360707   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.364948   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:59.365010   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:59.404075   71627 cri.go:89] found id: "48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:59.404096   71627 cri.go:89] found id: ""
	I0910 19:03:59.404103   71627 logs.go:276] 1 containers: [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27]
	I0910 19:03:59.404149   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.408098   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:59.408141   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:59.443767   71627 cri.go:89] found id: "55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:59.443792   71627 cri.go:89] found id: ""
	I0910 19:03:59.443802   71627 logs.go:276] 1 containers: [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20]
	I0910 19:03:59.443858   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.448348   71627 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:59.448397   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:59.485373   71627 cri.go:89] found id: ""
	I0910 19:03:59.485401   71627 logs.go:276] 0 containers: []
	W0910 19:03:59.485409   71627 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:59.485414   71627 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:03:59.485470   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:03:59.522641   71627 cri.go:89] found id: "b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:59.522660   71627 cri.go:89] found id: "173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:59.522664   71627 cri.go:89] found id: ""
	I0910 19:03:59.522671   71627 logs.go:276] 2 containers: [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9]
	I0910 19:03:59.522726   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.527283   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.531256   71627 logs.go:123] Gathering logs for etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] ...
	I0910 19:03:59.531275   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:59.576358   71627 logs.go:123] Gathering logs for coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] ...
	I0910 19:03:59.576382   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:59.625938   71627 logs.go:123] Gathering logs for kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] ...
	I0910 19:03:59.625974   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:59.664362   71627 logs.go:123] Gathering logs for kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] ...
	I0910 19:03:59.664386   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:59.718655   71627 logs.go:123] Gathering logs for storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] ...
	I0910 19:03:59.718686   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:59.763954   71627 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:59.763984   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:59.785217   71627 logs.go:123] Gathering logs for kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] ...
	I0910 19:03:59.785248   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:59.836560   71627 logs.go:123] Gathering logs for kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] ...
	I0910 19:03:59.836604   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:59.878973   71627 logs.go:123] Gathering logs for storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] ...
	I0910 19:03:59.879001   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:59.929851   71627 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:59.929878   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:00.400346   71627 logs.go:123] Gathering logs for container status ...
	I0910 19:04:00.400384   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:00.442281   71627 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:00.442307   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:00.510448   71627 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:00.510480   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:03:57.665980   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:59.666054   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:01.668052   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:59.558561   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:59.572993   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:59.573094   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:59.618957   72122 cri.go:89] found id: ""
	I0910 19:03:59.618988   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.618999   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:59.619008   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:59.619072   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:59.662544   72122 cri.go:89] found id: ""
	I0910 19:03:59.662643   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.662661   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:59.662673   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:59.662750   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:59.704323   72122 cri.go:89] found id: ""
	I0910 19:03:59.704349   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.704360   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:59.704367   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:59.704426   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:59.738275   72122 cri.go:89] found id: ""
	I0910 19:03:59.738301   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.738311   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:59.738317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:59.738367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:59.778887   72122 cri.go:89] found id: ""
	I0910 19:03:59.778922   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.778934   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:59.778944   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:59.779010   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:59.814953   72122 cri.go:89] found id: ""
	I0910 19:03:59.814985   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.814995   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:59.815003   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:59.815064   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:59.850016   72122 cri.go:89] found id: ""
	I0910 19:03:59.850048   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.850061   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:59.850069   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:59.850131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:59.887546   72122 cri.go:89] found id: ""
	I0910 19:03:59.887589   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.887600   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:59.887613   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:59.887632   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:59.938761   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:59.938784   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:59.954572   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:59.954603   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:04:00.029593   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:04:00.029622   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:00.029638   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:00.121427   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:04:00.121462   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:02.660924   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:02.674661   72122 kubeadm.go:597] duration metric: took 4m3.166175956s to restartPrimaryControlPlane
	W0910 19:04:02.674744   72122 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0910 19:04:02.674769   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:04:03.133507   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:03.150426   72122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:04:03.161678   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:04:03.173362   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:04:03.173389   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 19:04:03.173436   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:04:03.183872   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:04:03.183934   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:04:03.193891   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:04:03.203385   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:04:03.203450   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:04:03.216255   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:04:03.227938   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:04:03.228001   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:04:03.240799   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:04:03.252871   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:04:03.252922   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
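
The block above is minikube's stale-kubeconfig cleanup before re-running kubeadm: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it does not reference it. Here every file is already absent, so each grep exits with status 2 and the rm calls are no-ops. Condensed into a shell loop (the loop form is an assumption; the endpoint and file names are the ones in the log):

	# drop kubeconfigs that do not point at the expected control-plane endpoint
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done
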
	I0910 19:04:03.263682   72122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:04:03.337478   72122 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0910 19:04:03.337564   72122 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:04:03.506276   72122 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:04:03.506454   72122 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:04:03.506587   72122 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 19:04:03.697062   72122 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:04:03.698908   72122 out.go:235]   - Generating certificates and keys ...
	I0910 19:04:03.699004   72122 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:04:03.699083   72122 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:04:03.699184   72122 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:04:03.699270   72122 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:04:03.699361   72122 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:04:03.699517   72122 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:04:03.700040   72122 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:04:03.700773   72122 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:04:03.701529   72122 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:04:03.702334   72122 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:04:03.702627   72122 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:04:03.702715   72122 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:04:03.929760   72122 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:04:03.992724   72122 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:04:04.087552   72122 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:04:04.226550   72122 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:04:04.244695   72122 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:04:04.246125   72122 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:04:04.246187   72122 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:04:04.396099   72122 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:04:03.107779   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 19:04:03.112394   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I0910 19:04:03.113347   71627 api_server.go:141] control plane version: v1.31.0
	I0910 19:04:03.113367   71627 api_server.go:131] duration metric: took 3.922898577s to wait for apiserver health ...
	I0910 19:04:03.113375   71627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:04:03.113399   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:03.113443   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:03.153182   71627 cri.go:89] found id: "1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:04:03.153204   71627 cri.go:89] found id: ""
	I0910 19:04:03.153213   71627 logs.go:276] 1 containers: [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a]
	I0910 19:04:03.153263   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.157842   71627 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:03.157906   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:03.199572   71627 cri.go:89] found id: "f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:04:03.199594   71627 cri.go:89] found id: ""
	I0910 19:04:03.199604   71627 logs.go:276] 1 containers: [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade]
	I0910 19:04:03.199658   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.204332   71627 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:03.204409   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:03.252660   71627 cri.go:89] found id: "24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:04:03.252686   71627 cri.go:89] found id: ""
	I0910 19:04:03.252696   71627 logs.go:276] 1 containers: [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100]
	I0910 19:04:03.252751   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.257850   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:03.257915   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:03.300208   71627 cri.go:89] found id: "1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:04:03.300226   71627 cri.go:89] found id: ""
	I0910 19:04:03.300235   71627 logs.go:276] 1 containers: [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68]
	I0910 19:04:03.300294   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.304875   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:03.304953   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:03.346705   71627 cri.go:89] found id: "48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:04:03.346734   71627 cri.go:89] found id: ""
	I0910 19:04:03.346744   71627 logs.go:276] 1 containers: [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27]
	I0910 19:04:03.346807   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.351246   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:03.351314   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:03.391218   71627 cri.go:89] found id: "55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:04:03.391240   71627 cri.go:89] found id: ""
	I0910 19:04:03.391247   71627 logs.go:276] 1 containers: [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20]
	I0910 19:04:03.391290   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.396156   71627 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:03.396264   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:03.437436   71627 cri.go:89] found id: ""
	I0910 19:04:03.437464   71627 logs.go:276] 0 containers: []
	W0910 19:04:03.437473   71627 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:03.437479   71627 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:03.437551   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:03.476396   71627 cri.go:89] found id: "b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:04:03.476417   71627 cri.go:89] found id: "173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:04:03.476420   71627 cri.go:89] found id: ""
	I0910 19:04:03.476427   71627 logs.go:276] 2 containers: [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9]
	I0910 19:04:03.476481   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.480969   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.485821   71627 logs.go:123] Gathering logs for container status ...
	I0910 19:04:03.485843   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:03.537042   71627 logs.go:123] Gathering logs for kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] ...
	I0910 19:04:03.537079   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:04:03.599059   71627 logs.go:123] Gathering logs for coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] ...
	I0910 19:04:03.599102   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:04:03.637541   71627 logs.go:123] Gathering logs for kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] ...
	I0910 19:04:03.637576   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:04:03.682203   71627 logs.go:123] Gathering logs for kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] ...
	I0910 19:04:03.682234   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:04:03.734965   71627 logs.go:123] Gathering logs for storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] ...
	I0910 19:04:03.734992   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:04:03.769711   71627 logs.go:123] Gathering logs for storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] ...
	I0910 19:04:03.769738   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:04:03.805970   71627 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:03.805999   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:04.165756   71627 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:04.165796   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:04.254572   71627 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:04.254609   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:04.272637   71627 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:04.272686   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:04.421716   71627 logs.go:123] Gathering logs for etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] ...
	I0910 19:04:04.421756   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:04:04.476657   71627 logs.go:123] Gathering logs for kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] ...
	I0910 19:04:04.476701   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:04:07.038592   71627 system_pods.go:59] 8 kube-system pods found
	I0910 19:04:07.038618   71627 system_pods.go:61] "coredns-6f6b679f8f-nq9fl" [87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f] Running
	I0910 19:04:07.038624   71627 system_pods.go:61] "etcd-default-k8s-diff-port-557504" [484ce7e2-e5b6-4b96-928e-c4519e072425] Running
	I0910 19:04:07.038628   71627 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-557504" [55a71e48-2cca-484c-8186-176494f8158c] Running
	I0910 19:04:07.038632   71627 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-557504" [98e7866a-c4a1-4b90-a758-506502405feb] Running
	I0910 19:04:07.038636   71627 system_pods.go:61] "kube-proxy-4t8r9" [aca739fc-0169-433b-85f1-17bf3ab538cb] Running
	I0910 19:04:07.038639   71627 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-557504" [eaa24622-2f9c-47bd-b002-173d5fb06126] Running
	I0910 19:04:07.038644   71627 system_pods.go:61] "metrics-server-6867b74b74-4sfwg" [6b5d0161-6a62-4752-b714-ada6b3772956] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:07.038651   71627 system_pods.go:61] "storage-provisioner" [7536d42b-90f4-44de-a7ba-652f8e535304] Running
	I0910 19:04:07.038658   71627 system_pods.go:74] duration metric: took 3.925277367s to wait for pod list to return data ...
	I0910 19:04:07.038667   71627 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:04:07.040831   71627 default_sa.go:45] found service account: "default"
	I0910 19:04:07.040854   71627 default_sa.go:55] duration metric: took 2.180848ms for default service account to be created ...
	I0910 19:04:07.040864   71627 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:04:07.045130   71627 system_pods.go:86] 8 kube-system pods found
	I0910 19:04:07.045151   71627 system_pods.go:89] "coredns-6f6b679f8f-nq9fl" [87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f] Running
	I0910 19:04:07.045157   71627 system_pods.go:89] "etcd-default-k8s-diff-port-557504" [484ce7e2-e5b6-4b96-928e-c4519e072425] Running
	I0910 19:04:07.045162   71627 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-557504" [55a71e48-2cca-484c-8186-176494f8158c] Running
	I0910 19:04:07.045167   71627 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-557504" [98e7866a-c4a1-4b90-a758-506502405feb] Running
	I0910 19:04:07.045171   71627 system_pods.go:89] "kube-proxy-4t8r9" [aca739fc-0169-433b-85f1-17bf3ab538cb] Running
	I0910 19:04:07.045175   71627 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-557504" [eaa24622-2f9c-47bd-b002-173d5fb06126] Running
	I0910 19:04:07.045180   71627 system_pods.go:89] "metrics-server-6867b74b74-4sfwg" [6b5d0161-6a62-4752-b714-ada6b3772956] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:07.045184   71627 system_pods.go:89] "storage-provisioner" [7536d42b-90f4-44de-a7ba-652f8e535304] Running
	I0910 19:04:07.045191   71627 system_pods.go:126] duration metric: took 4.321406ms to wait for k8s-apps to be running ...
	I0910 19:04:07.045200   71627 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:04:07.045242   71627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:07.061292   71627 system_svc.go:56] duration metric: took 16.084643ms WaitForService to wait for kubelet
	I0910 19:04:07.061318   71627 kubeadm.go:582] duration metric: took 4m21.980981405s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:04:07.061342   71627 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:04:07.064260   71627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:04:07.064277   71627 node_conditions.go:123] node cpu capacity is 2
	I0910 19:04:07.064288   71627 node_conditions.go:105] duration metric: took 2.940712ms to run NodePressure ...
	I0910 19:04:07.064298   71627 start.go:241] waiting for startup goroutines ...
	I0910 19:04:07.064308   71627 start.go:246] waiting for cluster config update ...
	I0910 19:04:07.064318   71627 start.go:255] writing updated cluster config ...
	I0910 19:04:07.064627   71627 ssh_runner.go:195] Run: rm -f paused
	I0910 19:04:07.109814   71627 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:04:07.111804   71627 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-557504" cluster and "default" namespace by default
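
The default-k8s-diff-port cluster finishes startup with metrics-server-6867b74b74-4sfwg still Pending/ContainersNotReady, which lines up with the metrics-server and AddonExistsAfterStop failures listed at the top of this report. A quick way to inspect that pod directly (a sketch; the pod name is copied from the log above, and the deployment name metrics-server is an assumption inferred from the ReplicaSet name):

	# check why the metrics-server pod never becomes Ready
	kubectl --context default-k8s-diff-port-557504 -n kube-system get pods -o wide | grep metrics-server
	kubectl --context default-k8s-diff-port-557504 -n kube-system describe pod metrics-server-6867b74b74-4sfwg
	kubectl --context default-k8s-diff-port-557504 -n kube-system logs deploy/metrics-server --all-containers
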
	I0910 19:04:04.165083   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:06.663618   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:04.397627   72122 out.go:235]   - Booting up control plane ...
	I0910 19:04:04.397763   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:04:04.405199   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:04:04.407281   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:04:04.408182   72122 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:04:04.411438   72122 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
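
While kubeadm waits up to 4m0s for the v1.20.0 control plane to come up as static pods, the useful things to watch on the node are the manifest directory, the CRI container list, and the kubelet journal (a minimal sketch; these mirror the commands already used in the gathering loops above):

	# watch the control-plane bring-up while wait-control-plane is running
	ls -l /etc/kubernetes/manifests
	sudo crictl ps -a --name kube-apiserver
	sudo journalctl -u kubelet -n 100 --no-pager
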
	I0910 19:04:08.667046   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:11.164622   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:15.461731   71529 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.433698154s)
	I0910 19:04:15.461801   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:15.483515   71529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:04:15.497133   71529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:04:15.513903   71529 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:04:15.513924   71529 kubeadm.go:157] found existing configuration files:
	
	I0910 19:04:15.513972   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:04:15.524468   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:04:15.524529   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:04:15.534726   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:04:15.544892   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:04:15.544944   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:04:15.554663   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:04:15.564884   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:04:15.564978   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:04:15.574280   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:04:15.583882   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:04:15.583932   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:04:15.593971   71529 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:04:15.639220   71529 kubeadm.go:310] W0910 19:04:15.612221    3037 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:04:15.641412   71529 kubeadm.go:310] W0910 19:04:15.614470    3037 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:04:15.749471   71529 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:04:13.164865   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:15.165232   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:17.664384   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:19.664943   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:22.166309   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:24.300945   71529 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 19:04:24.301016   71529 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:04:24.301143   71529 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:04:24.301274   71529 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:04:24.301408   71529 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 19:04:24.301517   71529 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:04:24.302988   71529 out.go:235]   - Generating certificates and keys ...
	I0910 19:04:24.303079   71529 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:04:24.303132   71529 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:04:24.303197   71529 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:04:24.303252   71529 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:04:24.303315   71529 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:04:24.303367   71529 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:04:24.303443   71529 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:04:24.303517   71529 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:04:24.303631   71529 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:04:24.303737   71529 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:04:24.303792   71529 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:04:24.303873   71529 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:04:24.303954   71529 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:04:24.304037   71529 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 19:04:24.304120   71529 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:04:24.304217   71529 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:04:24.304299   71529 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:04:24.304423   71529 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:04:24.304523   71529 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:04:24.305839   71529 out.go:235]   - Booting up control plane ...
	I0910 19:04:24.305946   71529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:04:24.306046   71529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:04:24.306123   71529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:04:24.306254   71529 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:04:24.306338   71529 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:04:24.306387   71529 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:04:24.306507   71529 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 19:04:24.306608   71529 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 19:04:24.306679   71529 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.526264ms
	I0910 19:04:24.306748   71529 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 19:04:24.306801   71529 kubeadm.go:310] [api-check] The API server is healthy after 5.501960865s
	I0910 19:04:24.306887   71529 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 19:04:24.306997   71529 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 19:04:24.307045   71529 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 19:04:24.307202   71529 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-347802 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 19:04:24.307250   71529 kubeadm.go:310] [bootstrap-token] Using token: 3uw8fx.h3bliquui6tuj5mh
	I0910 19:04:24.308589   71529 out.go:235]   - Configuring RBAC rules ...
	I0910 19:04:24.308728   71529 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 19:04:24.308847   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 19:04:24.309021   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 19:04:24.309197   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 19:04:24.309330   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 19:04:24.309437   71529 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 19:04:24.309612   71529 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 19:04:24.309681   71529 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 19:04:24.309776   71529 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 19:04:24.309787   71529 kubeadm.go:310] 
	I0910 19:04:24.309865   71529 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 19:04:24.309874   71529 kubeadm.go:310] 
	I0910 19:04:24.309951   71529 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 19:04:24.309963   71529 kubeadm.go:310] 
	I0910 19:04:24.309984   71529 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 19:04:24.310033   71529 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 19:04:24.310085   71529 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 19:04:24.310091   71529 kubeadm.go:310] 
	I0910 19:04:24.310152   71529 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 19:04:24.310164   71529 kubeadm.go:310] 
	I0910 19:04:24.310203   71529 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 19:04:24.310214   71529 kubeadm.go:310] 
	I0910 19:04:24.310262   71529 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 19:04:24.310326   71529 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 19:04:24.310383   71529 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 19:04:24.310390   71529 kubeadm.go:310] 
	I0910 19:04:24.310457   71529 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 19:04:24.310525   71529 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 19:04:24.310531   71529 kubeadm.go:310] 
	I0910 19:04:24.310598   71529 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3uw8fx.h3bliquui6tuj5mh \
	I0910 19:04:24.310705   71529 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 \
	I0910 19:04:24.310728   71529 kubeadm.go:310] 	--control-plane 
	I0910 19:04:24.310731   71529 kubeadm.go:310] 
	I0910 19:04:24.310806   71529 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 19:04:24.310814   71529 kubeadm.go:310] 
	I0910 19:04:24.310884   71529 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3uw8fx.h3bliquui6tuj5mh \
	I0910 19:04:24.310978   71529 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 
	I0910 19:04:24.310994   71529 cni.go:84] Creating CNI manager for ""
	I0910 19:04:24.311006   71529 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:04:24.312411   71529 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 19:04:24.313516   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 19:04:24.326066   71529 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 19:04:24.346367   71529 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 19:04:24.346446   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:24.346475   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-347802 minikube.k8s.io/updated_at=2024_09_10T19_04_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=no-preload-347802 minikube.k8s.io/primary=true
	I0910 19:04:24.374396   71529 ops.go:34] apiserver oom_adj: -16
	I0910 19:04:24.561164   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:25.061938   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:25.561435   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:26.062175   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:26.561899   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:27.061256   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:24.664345   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:26.666316   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:27.561862   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:28.061889   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:28.562200   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:28.732352   71529 kubeadm.go:1113] duration metric: took 4.385961888s to wait for elevateKubeSystemPrivileges
	I0910 19:04:28.732387   71529 kubeadm.go:394] duration metric: took 5m2.035769941s to StartCluster
	I0910 19:04:28.732410   71529 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:04:28.732497   71529 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 19:04:28.735625   71529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:04:28.735909   71529 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 19:04:28.736234   71529 config.go:182] Loaded profile config "no-preload-347802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:04:28.736296   71529 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 19:04:28.736417   71529 addons.go:69] Setting storage-provisioner=true in profile "no-preload-347802"
	I0910 19:04:28.736445   71529 addons.go:234] Setting addon storage-provisioner=true in "no-preload-347802"
	W0910 19:04:28.736453   71529 addons.go:243] addon storage-provisioner should already be in state true
	I0910 19:04:28.736480   71529 host.go:66] Checking if "no-preload-347802" exists ...
	I0910 19:04:28.736667   71529 addons.go:69] Setting default-storageclass=true in profile "no-preload-347802"
	I0910 19:04:28.736674   71529 addons.go:69] Setting metrics-server=true in profile "no-preload-347802"
	I0910 19:04:28.736703   71529 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-347802"
	I0910 19:04:28.736717   71529 addons.go:234] Setting addon metrics-server=true in "no-preload-347802"
	W0910 19:04:28.736727   71529 addons.go:243] addon metrics-server should already be in state true
	I0910 19:04:28.736758   71529 host.go:66] Checking if "no-preload-347802" exists ...
	I0910 19:04:28.737346   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.737360   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.737401   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.737709   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.737809   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.737832   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.737891   71529 out.go:177] * Verifying Kubernetes components...
	I0910 19:04:28.739122   71529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:04:28.755720   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41059
	I0910 19:04:28.755754   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0910 19:04:28.756110   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46397
	I0910 19:04:28.756297   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.756298   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.756688   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.756870   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.756891   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.757053   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.757092   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.757402   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.757426   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.757451   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.757637   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.757759   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.758328   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.758368   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.760809   71529 addons.go:234] Setting addon default-storageclass=true in "no-preload-347802"
	W0910 19:04:28.760825   71529 addons.go:243] addon default-storageclass should already be in state true
	I0910 19:04:28.760848   71529 host.go:66] Checking if "no-preload-347802" exists ...
	I0910 19:04:28.761254   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.761285   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.761486   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.761994   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.762024   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.775766   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41103
	I0910 19:04:28.776199   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.776801   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.776824   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.777167   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.777359   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.777651   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0910 19:04:28.778091   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.778678   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.778696   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.779019   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.779215   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.779616   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 19:04:28.780231   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0910 19:04:28.780605   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.780675   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 19:04:28.781156   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.781183   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.781330   71529 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:04:28.781416   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.781810   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.781841   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.782326   71529 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 19:04:28.782391   71529 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:04:28.782408   71529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 19:04:28.782425   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 19:04:28.783393   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 19:04:28.783413   71529 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 19:04:28.783433   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 19:04:28.785287   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.785763   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 19:04:28.785792   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.785948   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 19:04:28.786114   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 19:04:28.786250   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 19:04:28.786397   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 19:04:28.786768   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.787101   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 19:04:28.787124   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.787330   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 19:04:28.787492   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 19:04:28.787637   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 19:04:28.787747   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 19:04:28.802599   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0910 19:04:28.802947   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.803402   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.803415   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.803711   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.803882   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.805296   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 19:04:28.805498   71529 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 19:04:28.805510   71529 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 19:04:28.805523   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 19:04:28.808615   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.809041   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 19:04:28.809056   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.809333   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 19:04:28.809518   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 19:04:28.809687   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 19:04:28.809792   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 19:04:28.974399   71529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:04:29.068531   71529 node_ready.go:35] waiting up to 6m0s for node "no-preload-347802" to be "Ready" ...
	I0910 19:04:29.084281   71529 node_ready.go:49] node "no-preload-347802" has status "Ready":"True"
	I0910 19:04:29.084306   71529 node_ready.go:38] duration metric: took 15.737646ms for node "no-preload-347802" to be "Ready" ...
	I0910 19:04:29.084317   71529 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:04:29.098794   71529 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:29.122272   71529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:04:29.132813   71529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 19:04:29.191758   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 19:04:29.191777   71529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 19:04:29.224998   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 19:04:29.225019   71529 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 19:04:29.264455   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:04:29.264489   71529 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 19:04:29.369504   71529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:04:30.199702   71529 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.066859027s)
	I0910 19:04:30.199757   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.199769   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.199850   71529 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.077541595s)
	I0910 19:04:30.199895   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.199909   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.200096   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.200135   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200147   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.200155   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.200154   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.200174   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.200187   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200201   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.200209   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.200220   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.200387   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200402   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.200617   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.200655   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200680   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.219416   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.219437   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.219697   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.219705   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.219741   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.366927   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.366957   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.367264   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.367279   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.367288   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.367302   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.367506   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.367520   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.367533   71529 addons.go:475] Verifying addon metrics-server=true in "no-preload-347802"
	I0910 19:04:30.369968   71529 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0910 19:04:30.371186   71529 addons.go:510] duration metric: took 1.634894777s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0910 19:04:31.104506   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:29.164993   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:31.668683   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:33.105761   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:35.606200   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:34.164783   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:36.663840   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:38.106188   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:39.106175   71529 pod_ready.go:93] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.106199   71529 pod_ready.go:82] duration metric: took 10.007378894s for pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.106210   71529 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hlbrz" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.111333   71529 pod_ready.go:93] pod "coredns-6f6b679f8f-hlbrz" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.111352   71529 pod_ready.go:82] duration metric: took 5.13344ms for pod "coredns-6f6b679f8f-hlbrz" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.111362   71529 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.116673   71529 pod_ready.go:93] pod "etcd-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.116689   71529 pod_ready.go:82] duration metric: took 5.319986ms for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.116697   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.125400   71529 pod_ready.go:93] pod "kube-apiserver-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.125422   71529 pod_ready.go:82] duration metric: took 8.717835ms for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.125433   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.133790   71529 pod_ready.go:93] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.133807   71529 pod_ready.go:82] duration metric: took 8.36626ms for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.133818   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gwzhs" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.504642   71529 pod_ready.go:93] pod "kube-proxy-gwzhs" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.504665   71529 pod_ready.go:82] duration metric: took 370.840119ms for pod "kube-proxy-gwzhs" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.504675   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.903625   71529 pod_ready.go:93] pod "kube-scheduler-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.903646   71529 pod_ready.go:82] duration metric: took 398.964651ms for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.903653   71529 pod_ready.go:39] duration metric: took 10.819325885s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:04:39.903666   71529 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:04:39.903710   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:39.918479   71529 api_server.go:72] duration metric: took 11.182520681s to wait for apiserver process to appear ...
	I0910 19:04:39.918501   71529 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:04:39.918521   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 19:04:39.922745   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0910 19:04:39.923681   71529 api_server.go:141] control plane version: v1.31.0
	I0910 19:04:39.923701   71529 api_server.go:131] duration metric: took 5.193102ms to wait for apiserver health ...
	I0910 19:04:39.923710   71529 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:04:40.106587   71529 system_pods.go:59] 9 kube-system pods found
	I0910 19:04:40.106614   71529 system_pods.go:61] "coredns-6f6b679f8f-bsp9f" [53cd67b5-b542-4b40-adf9-3aba78407735] Running
	I0910 19:04:40.106619   71529 system_pods.go:61] "coredns-6f6b679f8f-hlbrz" [1e66ea46-d3ad-44e4-b9fc-c7ea5c44ac15] Running
	I0910 19:04:40.106623   71529 system_pods.go:61] "etcd-no-preload-347802" [8fcf8fce-881a-44d8-8763-2a02c848f39e] Running
	I0910 19:04:40.106626   71529 system_pods.go:61] "kube-apiserver-no-preload-347802" [372d6d5b-bf44-4edf-bd25-7b75f5966d9e] Running
	I0910 19:04:40.106630   71529 system_pods.go:61] "kube-controller-manager-no-preload-347802" [6ab95db4-18a0-48b6-ae4c-b08ccdeaec01] Running
	I0910 19:04:40.106633   71529 system_pods.go:61] "kube-proxy-gwzhs" [f03fe8e3-bee7-4805-a1e9-83494f33105c] Running
	I0910 19:04:40.106637   71529 system_pods.go:61] "kube-scheduler-no-preload-347802" [5a130f4f-7577-4e25-ac66-b9181733e667] Running
	I0910 19:04:40.106642   71529 system_pods.go:61] "metrics-server-6867b74b74-cz4tz" [22d16ca9-922b-40d8-97d1-47a44ba70aa3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:40.106646   71529 system_pods.go:61] "storage-provisioner" [cd77229d-0209-459f-ac5e-96317c425f60] Running
	I0910 19:04:40.106652   71529 system_pods.go:74] duration metric: took 182.93737ms to wait for pod list to return data ...
	I0910 19:04:40.106662   71529 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:04:40.303294   71529 default_sa.go:45] found service account: "default"
	I0910 19:04:40.303316   71529 default_sa.go:55] duration metric: took 196.649242ms for default service account to be created ...
	I0910 19:04:40.303324   71529 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:04:40.506862   71529 system_pods.go:86] 9 kube-system pods found
	I0910 19:04:40.506894   71529 system_pods.go:89] "coredns-6f6b679f8f-bsp9f" [53cd67b5-b542-4b40-adf9-3aba78407735] Running
	I0910 19:04:40.506902   71529 system_pods.go:89] "coredns-6f6b679f8f-hlbrz" [1e66ea46-d3ad-44e4-b9fc-c7ea5c44ac15] Running
	I0910 19:04:40.506908   71529 system_pods.go:89] "etcd-no-preload-347802" [8fcf8fce-881a-44d8-8763-2a02c848f39e] Running
	I0910 19:04:40.506913   71529 system_pods.go:89] "kube-apiserver-no-preload-347802" [372d6d5b-bf44-4edf-bd25-7b75f5966d9e] Running
	I0910 19:04:40.506919   71529 system_pods.go:89] "kube-controller-manager-no-preload-347802" [6ab95db4-18a0-48b6-ae4c-b08ccdeaec01] Running
	I0910 19:04:40.506925   71529 system_pods.go:89] "kube-proxy-gwzhs" [f03fe8e3-bee7-4805-a1e9-83494f33105c] Running
	I0910 19:04:40.506931   71529 system_pods.go:89] "kube-scheduler-no-preload-347802" [5a130f4f-7577-4e25-ac66-b9181733e667] Running
	I0910 19:04:40.506940   71529 system_pods.go:89] "metrics-server-6867b74b74-cz4tz" [22d16ca9-922b-40d8-97d1-47a44ba70aa3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:40.506949   71529 system_pods.go:89] "storage-provisioner" [cd77229d-0209-459f-ac5e-96317c425f60] Running
	I0910 19:04:40.506963   71529 system_pods.go:126] duration metric: took 203.633111ms to wait for k8s-apps to be running ...
	I0910 19:04:40.506974   71529 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:04:40.507032   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:40.522711   71529 system_svc.go:56] duration metric: took 15.728044ms WaitForService to wait for kubelet
	I0910 19:04:40.522739   71529 kubeadm.go:582] duration metric: took 11.786784927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:04:40.522761   71529 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:04:40.702993   71529 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:04:40.703011   71529 node_conditions.go:123] node cpu capacity is 2
	I0910 19:04:40.703020   71529 node_conditions.go:105] duration metric: took 180.253729ms to run NodePressure ...
	I0910 19:04:40.703031   71529 start.go:241] waiting for startup goroutines ...
	I0910 19:04:40.703037   71529 start.go:246] waiting for cluster config update ...
	I0910 19:04:40.703046   71529 start.go:255] writing updated cluster config ...
	I0910 19:04:40.703329   71529 ssh_runner.go:195] Run: rm -f paused
	I0910 19:04:40.750434   71529 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:04:40.752453   71529 out.go:177] * Done! kubectl is now configured to use "no-preload-347802" cluster and "default" namespace by default
	I0910 19:04:37.670616   71183 pod_ready.go:82] duration metric: took 4m0.012645309s for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	E0910 19:04:37.670637   71183 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0910 19:04:37.670644   71183 pod_ready.go:39] duration metric: took 4m3.614436373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:04:37.670658   71183 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:04:37.670693   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:37.670746   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:37.721269   71183 cri.go:89] found id: "b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:37.721295   71183 cri.go:89] found id: ""
	I0910 19:04:37.721303   71183 logs.go:276] 1 containers: [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293]
	I0910 19:04:37.721361   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.725648   71183 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:37.725711   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:37.760937   71183 cri.go:89] found id: "4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:37.760967   71183 cri.go:89] found id: ""
	I0910 19:04:37.760978   71183 logs.go:276] 1 containers: [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34]
	I0910 19:04:37.761034   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.765181   71183 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:37.765243   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:37.800419   71183 cri.go:89] found id: "6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:37.800447   71183 cri.go:89] found id: ""
	I0910 19:04:37.800457   71183 logs.go:276] 1 containers: [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313]
	I0910 19:04:37.800509   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.805255   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:37.805330   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:37.849032   71183 cri.go:89] found id: "6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:37.849055   71183 cri.go:89] found id: ""
	I0910 19:04:37.849064   71183 logs.go:276] 1 containers: [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc]
	I0910 19:04:37.849136   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.853148   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:37.853224   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:37.888327   71183 cri.go:89] found id: "f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:37.888352   71183 cri.go:89] found id: ""
	I0910 19:04:37.888361   71183 logs.go:276] 1 containers: [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e]
	I0910 19:04:37.888417   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.892721   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:37.892782   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:37.928648   71183 cri.go:89] found id: "2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:37.928671   71183 cri.go:89] found id: ""
	I0910 19:04:37.928679   71183 logs.go:276] 1 containers: [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3]
	I0910 19:04:37.928731   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.932746   71183 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:37.932804   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:37.967343   71183 cri.go:89] found id: ""
	I0910 19:04:37.967372   71183 logs.go:276] 0 containers: []
	W0910 19:04:37.967382   71183 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:37.967387   71183 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:37.967435   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:38.004150   71183 cri.go:89] found id: "11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:38.004173   71183 cri.go:89] found id: "2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:38.004176   71183 cri.go:89] found id: ""
	I0910 19:04:38.004183   71183 logs.go:276] 2 containers: [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f]
	I0910 19:04:38.004227   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:38.008118   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:38.011779   71183 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:38.011799   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:38.026386   71183 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:38.026405   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:38.149296   71183 logs.go:123] Gathering logs for kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] ...
	I0910 19:04:38.149324   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:38.200987   71183 logs.go:123] Gathering logs for etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] ...
	I0910 19:04:38.201019   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:38.243953   71183 logs.go:123] Gathering logs for coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] ...
	I0910 19:04:38.243983   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:38.287242   71183 logs.go:123] Gathering logs for kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] ...
	I0910 19:04:38.287272   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:38.329165   71183 logs.go:123] Gathering logs for kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] ...
	I0910 19:04:38.329193   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:38.391117   71183 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:38.391144   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:38.464906   71183 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:38.464944   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:38.979681   71183 logs.go:123] Gathering logs for storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] ...
	I0910 19:04:38.979732   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:39.015604   71183 logs.go:123] Gathering logs for storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] ...
	I0910 19:04:39.015636   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:39.055715   71183 logs.go:123] Gathering logs for container status ...
	I0910 19:04:39.055748   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:39.103920   71183 logs.go:123] Gathering logs for kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] ...
	I0910 19:04:39.103952   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:41.650354   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:41.667568   71183 api_server.go:72] duration metric: took 4m15.330735169s to wait for apiserver process to appear ...
	I0910 19:04:41.667604   71183 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:04:41.667636   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:41.667682   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:41.707476   71183 cri.go:89] found id: "b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:41.707507   71183 cri.go:89] found id: ""
	I0910 19:04:41.707520   71183 logs.go:276] 1 containers: [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293]
	I0910 19:04:41.707590   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.711732   71183 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:41.711794   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:41.745943   71183 cri.go:89] found id: "4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:41.745963   71183 cri.go:89] found id: ""
	I0910 19:04:41.745972   71183 logs.go:276] 1 containers: [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34]
	I0910 19:04:41.746023   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.749930   71183 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:41.749978   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:41.790296   71183 cri.go:89] found id: "6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:41.790318   71183 cri.go:89] found id: ""
	I0910 19:04:41.790327   71183 logs.go:276] 1 containers: [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313]
	I0910 19:04:41.790388   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.794933   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:41.794988   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:41.840669   71183 cri.go:89] found id: "6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:41.840695   71183 cri.go:89] found id: ""
	I0910 19:04:41.840704   71183 logs.go:276] 1 containers: [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc]
	I0910 19:04:41.840762   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.845674   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:41.845729   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:41.891686   71183 cri.go:89] found id: "f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:41.891708   71183 cri.go:89] found id: ""
	I0910 19:04:41.891717   71183 logs.go:276] 1 containers: [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e]
	I0910 19:04:41.891774   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.896435   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:41.896486   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:41.935802   71183 cri.go:89] found id: "2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:41.935829   71183 cri.go:89] found id: ""
	I0910 19:04:41.935838   71183 logs.go:276] 1 containers: [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3]
	I0910 19:04:41.935882   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.940924   71183 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:41.940979   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:41.980326   71183 cri.go:89] found id: ""
	I0910 19:04:41.980349   71183 logs.go:276] 0 containers: []
	W0910 19:04:41.980357   71183 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:41.980362   71183 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:41.980409   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:42.021683   71183 cri.go:89] found id: "11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:42.021701   71183 cri.go:89] found id: "2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:42.021704   71183 cri.go:89] found id: ""
	I0910 19:04:42.021711   71183 logs.go:276] 2 containers: [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f]
	I0910 19:04:42.021760   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:42.025986   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:42.029896   71183 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:42.029919   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:42.101147   71183 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:42.101182   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:42.115299   71183 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:42.115324   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:42.230472   71183 logs.go:123] Gathering logs for kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] ...
	I0910 19:04:42.230503   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:42.285314   71183 logs.go:123] Gathering logs for etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] ...
	I0910 19:04:42.285341   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:42.338243   71183 logs.go:123] Gathering logs for coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] ...
	I0910 19:04:42.338283   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:42.380609   71183 logs.go:123] Gathering logs for kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] ...
	I0910 19:04:42.380636   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:42.424255   71183 logs.go:123] Gathering logs for kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] ...
	I0910 19:04:42.424290   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:42.481943   71183 logs.go:123] Gathering logs for kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] ...
	I0910 19:04:42.481972   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:42.525590   71183 logs.go:123] Gathering logs for storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] ...
	I0910 19:04:42.525613   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:42.566519   71183 logs.go:123] Gathering logs for storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] ...
	I0910 19:04:42.566546   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:42.601221   71183 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:42.601256   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:43.021780   71183 logs.go:123] Gathering logs for container status ...
	I0910 19:04:43.021816   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:45.569149   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:04:45.575146   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I0910 19:04:45.576058   71183 api_server.go:141] control plane version: v1.31.0
	I0910 19:04:45.576077   71183 api_server.go:131] duration metric: took 3.908465286s to wait for apiserver health ...
	I0910 19:04:45.576088   71183 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:04:45.576112   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:45.576159   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:45.631224   71183 cri.go:89] found id: "b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:45.631246   71183 cri.go:89] found id: ""
	I0910 19:04:45.631254   71183 logs.go:276] 1 containers: [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293]
	I0910 19:04:45.631310   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.636343   71183 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:45.636408   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:45.675538   71183 cri.go:89] found id: "4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:45.675558   71183 cri.go:89] found id: ""
	I0910 19:04:45.675565   71183 logs.go:276] 1 containers: [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34]
	I0910 19:04:45.675620   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.679865   71183 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:45.679921   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:45.724808   71183 cri.go:89] found id: "6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:45.724835   71183 cri.go:89] found id: ""
	I0910 19:04:45.724844   71183 logs.go:276] 1 containers: [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313]
	I0910 19:04:45.724898   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.729083   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:45.729141   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:45.762943   71183 cri.go:89] found id: "6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:45.762965   71183 cri.go:89] found id: ""
	I0910 19:04:45.762973   71183 logs.go:276] 1 containers: [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc]
	I0910 19:04:45.763022   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.766889   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:45.766935   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:45.802849   71183 cri.go:89] found id: "f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:45.802875   71183 cri.go:89] found id: ""
	I0910 19:04:45.802883   71183 logs.go:276] 1 containers: [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e]
	I0910 19:04:45.802924   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.806796   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:45.806860   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:45.841656   71183 cri.go:89] found id: "2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:45.841675   71183 cri.go:89] found id: ""
	I0910 19:04:45.841682   71183 logs.go:276] 1 containers: [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3]
	I0910 19:04:45.841722   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.846078   71183 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:45.846145   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:45.883750   71183 cri.go:89] found id: ""
	I0910 19:04:45.883773   71183 logs.go:276] 0 containers: []
	W0910 19:04:45.883787   71183 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:45.883795   71183 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:45.883857   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:45.918786   71183 cri.go:89] found id: "11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:45.918815   71183 cri.go:89] found id: "2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:45.918822   71183 cri.go:89] found id: ""
	I0910 19:04:45.918829   71183 logs.go:276] 2 containers: [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f]
	I0910 19:04:45.918876   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.923329   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.927395   71183 logs.go:123] Gathering logs for storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] ...
	I0910 19:04:45.927417   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:45.963527   71183 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:45.963557   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:46.364843   71183 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:46.364886   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:46.379339   71183 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:46.379366   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:46.483159   71183 logs.go:123] Gathering logs for kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] ...
	I0910 19:04:46.483190   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:46.523850   71183 logs.go:123] Gathering logs for kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] ...
	I0910 19:04:46.523877   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:46.574864   71183 logs.go:123] Gathering logs for kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] ...
	I0910 19:04:46.574905   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:46.613765   71183 logs.go:123] Gathering logs for storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] ...
	I0910 19:04:46.613793   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:46.659791   71183 logs.go:123] Gathering logs for container status ...
	I0910 19:04:46.659819   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:46.722103   71183 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:46.722138   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:46.794098   71183 logs.go:123] Gathering logs for kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] ...
	I0910 19:04:46.794140   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:46.850112   71183 logs.go:123] Gathering logs for etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] ...
	I0910 19:04:46.850148   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:46.899733   71183 logs.go:123] Gathering logs for coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] ...
	I0910 19:04:46.899770   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
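The block above is minikube's log collector walking the container IDs it found earlier with "sudo crictl ps -a --quiet --name=<component>" and pulling the last 400 lines from each one. A minimal Go sketch of that pattern follows, for illustration only: the helper name is made up, the container ID in main is the coredns ID taken from the log, and this is not minikube's actual ssh_runner/logs implementation.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherContainerLogs mirrors the "sudo /usr/bin/crictl logs --tail 400 <id>"
    // calls in the log above; a simplified illustration, not minikube's code.
    func gatherContainerLogs(ids []string) map[string]string {
        out := make(map[string]string)
        for _, id := range ids {
            b, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                out[id] = fmt.Sprintf("error: %v\n%s", err, b)
                continue
            }
            out[id] = string(b)
        }
        return out
    }

    func main() {
        // Placeholder ID (coredns, from the log); real IDs come from "crictl ps -a --quiet --name=<component>".
        for id, text := range gatherContainerLogs([]string{"6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"}) {
            fmt.Printf("== %s ==\n%s\n", id, text)
        }
    }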
	I0910 19:04:44.413134   72122 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0910 19:04:44.413215   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:44.413400   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:04:49.448164   71183 system_pods.go:59] 8 kube-system pods found
	I0910 19:04:49.448194   71183 system_pods.go:61] "coredns-6f6b679f8f-mt78p" [b4bbfe99-3c36-4095-b7e8-ee0861f9973f] Running
	I0910 19:04:49.448201   71183 system_pods.go:61] "etcd-embed-certs-836868" [0515a8db-e1b9-41d9-a69e-e49fbd6af70f] Running
	I0910 19:04:49.448207   71183 system_pods.go:61] "kube-apiserver-embed-certs-836868" [f6771940-518b-4ae6-93a6-8ffd2e08a6ff] Running
	I0910 19:04:49.448216   71183 system_pods.go:61] "kube-controller-manager-embed-certs-836868" [07f6b11e-478b-4501-b615-8906877c7cbe] Running
	I0910 19:04:49.448220   71183 system_pods.go:61] "kube-proxy-4fddv" [13f0b1df-26eb-4a6c-957d-0b7655309cb9] Running
	I0910 19:04:49.448225   71183 system_pods.go:61] "kube-scheduler-embed-certs-836868" [a16bf2b1-3fe3-487b-92f7-f71f693b83dd] Running
	I0910 19:04:49.448232   71183 system_pods.go:61] "metrics-server-6867b74b74-26knw" [fdf89bfa-f2b6-4dc4-9279-ed75c1256494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:49.448239   71183 system_pods.go:61] "storage-provisioner" [47ed78c5-1cce-4d50-a023-5c356f331035] Running
	I0910 19:04:49.448248   71183 system_pods.go:74] duration metric: took 3.872154051s to wait for pod list to return data ...
	I0910 19:04:49.448255   71183 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:04:49.450795   71183 default_sa.go:45] found service account: "default"
	I0910 19:04:49.450816   71183 default_sa.go:55] duration metric: took 2.553358ms for default service account to be created ...
	I0910 19:04:49.450826   71183 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:04:49.454993   71183 system_pods.go:86] 8 kube-system pods found
	I0910 19:04:49.455015   71183 system_pods.go:89] "coredns-6f6b679f8f-mt78p" [b4bbfe99-3c36-4095-b7e8-ee0861f9973f] Running
	I0910 19:04:49.455020   71183 system_pods.go:89] "etcd-embed-certs-836868" [0515a8db-e1b9-41d9-a69e-e49fbd6af70f] Running
	I0910 19:04:49.455024   71183 system_pods.go:89] "kube-apiserver-embed-certs-836868" [f6771940-518b-4ae6-93a6-8ffd2e08a6ff] Running
	I0910 19:04:49.455030   71183 system_pods.go:89] "kube-controller-manager-embed-certs-836868" [07f6b11e-478b-4501-b615-8906877c7cbe] Running
	I0910 19:04:49.455033   71183 system_pods.go:89] "kube-proxy-4fddv" [13f0b1df-26eb-4a6c-957d-0b7655309cb9] Running
	I0910 19:04:49.455038   71183 system_pods.go:89] "kube-scheduler-embed-certs-836868" [a16bf2b1-3fe3-487b-92f7-f71f693b83dd] Running
	I0910 19:04:49.455047   71183 system_pods.go:89] "metrics-server-6867b74b74-26knw" [fdf89bfa-f2b6-4dc4-9279-ed75c1256494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:49.455053   71183 system_pods.go:89] "storage-provisioner" [47ed78c5-1cce-4d50-a023-5c356f331035] Running
	I0910 19:04:49.455062   71183 system_pods.go:126] duration metric: took 4.230457ms to wait for k8s-apps to be running ...
	I0910 19:04:49.455073   71183 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:04:49.455130   71183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:49.471265   71183 system_svc.go:56] duration metric: took 16.184718ms WaitForService to wait for kubelet
	I0910 19:04:49.471293   71183 kubeadm.go:582] duration metric: took 4m23.134472506s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:04:49.471320   71183 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:04:49.475529   71183 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:04:49.475548   71183 node_conditions.go:123] node cpu capacity is 2
	I0910 19:04:49.475558   71183 node_conditions.go:105] duration metric: took 4.228611ms to run NodePressure ...
	I0910 19:04:49.475567   71183 start.go:241] waiting for startup goroutines ...
	I0910 19:04:49.475577   71183 start.go:246] waiting for cluster config update ...
	I0910 19:04:49.475589   71183 start.go:255] writing updated cluster config ...
	I0910 19:04:49.475827   71183 ssh_runner.go:195] Run: rm -f paused
	I0910 19:04:49.522354   71183 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:04:49.524738   71183 out.go:177] * Done! kubectl is now configured to use "embed-certs-836868" cluster and "default" namespace by default
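Before printing "Done!", the run above waited for the kube-system pods to report Running, for the default service account to exist, and for the kubelet systemd unit to be active ("sudo systemctl is-active --quiet service kubelet" succeeds only when the unit is active). A minimal Go sketch of that last check, simplified to query the kubelet unit directly and not taken from minikube's source:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // kubeletActive mirrors the systemctl probe in the log above: with --quiet,
    // systemctl prints nothing and reports the unit state via its exit code,
    // so a nil error from Run() means the kubelet unit is active.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }

    func main() {
        fmt.Println("kubelet active:", kubeletActive())
    }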
	I0910 19:04:49.413796   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:49.413967   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:04:59.414341   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:59.414514   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:19.415680   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:05:19.415950   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:59.417770   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:05:59.418015   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:59.418035   72122 kubeadm.go:310] 
	I0910 19:05:59.418101   72122 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0910 19:05:59.418137   72122 kubeadm.go:310] 		timed out waiting for the condition
	I0910 19:05:59.418143   72122 kubeadm.go:310] 
	I0910 19:05:59.418178   72122 kubeadm.go:310] 	This error is likely caused by:
	I0910 19:05:59.418207   72122 kubeadm.go:310] 		- The kubelet is not running
	I0910 19:05:59.418313   72122 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0910 19:05:59.418321   72122 kubeadm.go:310] 
	I0910 19:05:59.418443   72122 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0910 19:05:59.418477   72122 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0910 19:05:59.418519   72122 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0910 19:05:59.418527   72122 kubeadm.go:310] 
	I0910 19:05:59.418625   72122 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0910 19:05:59.418723   72122 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0910 19:05:59.418731   72122 kubeadm.go:310] 
	I0910 19:05:59.418869   72122 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0910 19:05:59.418976   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0910 19:05:59.419045   72122 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0910 19:05:59.419141   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0910 19:05:59.419152   72122 kubeadm.go:310] 
	I0910 19:05:59.420015   72122 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:05:59.420093   72122 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0910 19:05:59.420165   72122 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0910 19:05:59.420289   72122 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
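The repeated [kubelet-check] lines above are kubeadm polling the kubelet's health endpoint on localhost:10248; every probe fails with "connection refused" because the kubelet never comes up, and after four minutes the wait-control-plane phase gives up. A minimal Go sketch of that kind of probe, for illustration only (kubeadm's real retry and backoff logic is more involved):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probeKubelet issues the same request the [kubelet-check] messages describe:
    // GET http://localhost:10248/healthz with a short timeout.
    func probeKubelet() error {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            return err // e.g. "connect: connection refused" while the kubelet is down
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("kubelet healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        if err := probeKubelet(); err != nil {
            fmt.Println("kubelet not healthy:", err)
            return
        }
        fmt.Println("kubelet healthy")
    }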
	
	I0910 19:05:59.420339   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:06:04.848652   72122 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.428289133s)
	I0910 19:06:04.848719   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:06:04.862914   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:06:04.872903   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:06:04.872920   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 19:06:04.872960   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:06:04.882109   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:06:04.882168   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:06:04.890962   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:06:04.899925   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:06:04.899985   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:06:04.908796   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:06:04.917123   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:06:04.917173   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:06:04.925821   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:06:04.937885   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:06:04.937963   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
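Before retrying kubeadm init, minikube checks each file under /etc/kubernetes for the expected control-plane endpoint (https://control-plane.minikube.internal:8443) and removes any kubeconfig that does not reference it; here every grep exits with status 2 because "kubeadm reset" already deleted the files. A minimal Go sketch of that keep-or-remove decision, not minikube's actual code:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // removeIfStale keeps a kubeconfig only if it references the expected
    // control-plane endpoint, mirroring the grep-then-rm pattern in the log.
    func removeIfStale(path, endpoint string) error {
        b, err := os.ReadFile(path)
        if err != nil {
            return err // e.g. the file does not exist, as seen above
        }
        if !strings.Contains(string(b), endpoint) {
            return os.Remove(path)
        }
        return nil
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if err := removeIfStale(f, endpoint); err != nil {
                fmt.Printf("%s: %v\n", f, err)
            }
        }
    }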
	I0910 19:06:04.948108   72122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:06:05.019246   72122 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0910 19:06:05.019321   72122 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:06:05.162639   72122 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:06:05.162770   72122 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:06:05.162918   72122 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 19:06:05.343270   72122 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:06:05.345092   72122 out.go:235]   - Generating certificates and keys ...
	I0910 19:06:05.345189   72122 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:06:05.345299   72122 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:06:05.345417   72122 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:06:05.345497   72122 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:06:05.345606   72122 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:06:05.345718   72122 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:06:05.345981   72122 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:06:05.346367   72122 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:06:05.346822   72122 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:06:05.347133   72122 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:06:05.347246   72122 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:06:05.347346   72122 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:06:05.536681   72122 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:06:05.773929   72122 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:06:05.994857   72122 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:06:06.139145   72122 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:06:06.154510   72122 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:06:06.155479   72122 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:06:06.155548   72122 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:06:06.311520   72122 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:06:06.314167   72122 out.go:235]   - Booting up control plane ...
	I0910 19:06:06.314311   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:06:06.320856   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:06:06.321801   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:06:06.322508   72122 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:06:06.324744   72122 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 19:06:46.327168   72122 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0910 19:06:46.327286   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:06:46.327534   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:06:51.328423   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:06:51.328643   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:07:01.329028   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:07:01.329315   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:07:21.329371   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:07:21.329627   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:08:01.328238   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:08:01.328535   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:08:01.328566   72122 kubeadm.go:310] 
	I0910 19:08:01.328625   72122 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0910 19:08:01.328688   72122 kubeadm.go:310] 		timed out waiting for the condition
	I0910 19:08:01.328701   72122 kubeadm.go:310] 
	I0910 19:08:01.328749   72122 kubeadm.go:310] 	This error is likely caused by:
	I0910 19:08:01.328797   72122 kubeadm.go:310] 		- The kubelet is not running
	I0910 19:08:01.328941   72122 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0910 19:08:01.328953   72122 kubeadm.go:310] 
	I0910 19:08:01.329068   72122 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0910 19:08:01.329136   72122 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0910 19:08:01.329177   72122 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0910 19:08:01.329191   72122 kubeadm.go:310] 
	I0910 19:08:01.329310   72122 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0910 19:08:01.329377   72122 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0910 19:08:01.329383   72122 kubeadm.go:310] 
	I0910 19:08:01.329468   72122 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0910 19:08:01.329539   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0910 19:08:01.329607   72122 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0910 19:08:01.329667   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0910 19:08:01.329674   72122 kubeadm.go:310] 
	I0910 19:08:01.330783   72122 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:08:01.330892   72122 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0910 19:08:01.330963   72122 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0910 19:08:01.331020   72122 kubeadm.go:394] duration metric: took 8m1.874926868s to StartCluster
	I0910 19:08:01.331061   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:08:01.331117   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:08:01.385468   72122 cri.go:89] found id: ""
	I0910 19:08:01.385492   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.385499   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:08:01.385505   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:08:01.385571   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:08:01.424028   72122 cri.go:89] found id: ""
	I0910 19:08:01.424051   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.424060   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:08:01.424064   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:08:01.424121   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:08:01.462946   72122 cri.go:89] found id: ""
	I0910 19:08:01.462973   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.462983   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:08:01.462991   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:08:01.463045   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:08:01.498242   72122 cri.go:89] found id: ""
	I0910 19:08:01.498269   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.498278   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:08:01.498283   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:08:01.498329   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:08:01.532917   72122 cri.go:89] found id: ""
	I0910 19:08:01.532946   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.532953   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:08:01.532959   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:08:01.533011   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:08:01.567935   72122 cri.go:89] found id: ""
	I0910 19:08:01.567959   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.567967   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:08:01.567973   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:08:01.568027   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:08:01.601393   72122 cri.go:89] found id: ""
	I0910 19:08:01.601418   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.601426   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:08:01.601432   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:08:01.601489   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:08:01.639307   72122 cri.go:89] found id: ""
	I0910 19:08:01.639335   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.639345   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:08:01.639358   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:08:01.639373   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:08:01.726566   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:08:01.726591   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:08:01.726614   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:08:01.839965   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:08:01.840004   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:08:01.879658   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:08:01.879687   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:08:01.939066   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:08:01.939102   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
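With no Kubernetes containers found and the apiserver refusing connections on localhost:8443, the collector falls back to host-level sources: journalctl for the kubelet and CRI-O units, a filtered dmesg, and a raw container-status listing. A minimal Go sketch of those fallback commands, with the command strings copied from the log rather than from minikube's source:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // hostDiagnostics runs the host-level collectors used when no control-plane
    // containers are available; each command goes through bash so the pipes and
    // quoting behave as they do in the log above.
    func hostDiagnostics() map[string]string {
        cmds := map[string]string{
            "kubelet": "sudo journalctl -u kubelet -n 400",
            "crio":    "sudo journalctl -u crio -n 400",
            "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        out := make(map[string]string)
        for name, c := range cmds {
            b, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            if err != nil {
                out[name] = fmt.Sprintf("error: %v\n%s", err, b)
                continue
            }
            out[name] = string(b)
        }
        return out
    }

    func main() {
        for name, text := range hostDiagnostics() {
            fmt.Printf("== %s ==\n%s\n", name, text)
        }
    }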
	W0910 19:08:01.955390   72122 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0910 19:08:01.955436   72122 out.go:270] * 
	W0910 19:08:01.955500   72122 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0910 19:08:01.955524   72122 out.go:270] * 
	W0910 19:08:01.956343   72122 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 19:08:01.959608   72122 out.go:201] 
	W0910 19:08:01.960877   72122 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0910 19:08:01.960929   72122 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0910 19:08:01.960957   72122 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
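The suggested remediation above amounts to rerunning the start with the kubelet pinned to the systemd cgroup driver, i.e. passing --extra-config=kubelet.cgroup-driver=systemd to minikube start; a cgroup-driver mismatch between the kubelet and the container runtime is a common reason the kubelet never answers on its :10248 health endpoint, which matches the failure pattern in this run.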
	I0910 19:08:01.962345   72122 out.go:201] 
	
	
	==> CRI-O <==
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.431686690Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995631431658259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=225f78e0-7dec-496b-a98b-6fdf2a4835d1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.432277811Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ba764ee-16b2-4283-be2f-34350ac52e2b name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.432329433Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ba764ee-16b2-4283-be2f-34350ac52e2b name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.432573429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0,PodSandboxId:241ced956ebccecc9cd11be7255ea9eacc60a5d7fb579b5a5cb928683f7c5af5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725994854862538360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ed78c5-1cce-4d50-a023-5c356f331035,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd1bbbab084640d79766d7d14d3cdc5c66bd653aaae7d35f5cb8135b378c4efc,PodSandboxId:a5aeeb32481e552762401be5447df77c550225026dc65b3b81008bb8152ef1c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1725994833794038972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 13271a5e-f6a8-4a3d-98b2-9eaf96b8dff5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313,PodSandboxId:88ef68c9eb85921397b1c48b3c9679d1315503d56a2c0a25898df69bad8097da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725994831701705826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mt78p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4bbfe99-3c36-4095-b7e8-ee0861f9973f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f,PodSandboxId:241ced956ebccecc9cd11be7255ea9eacc60a5d7fb579b5a5cb928683f7c5af5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725994824014297214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
47ed78c5-1cce-4d50-a023-5c356f331035,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e,PodSandboxId:3cffcbe8ca573f781fa2a7ad185c1e6cfad19524b6a4216d75c164ad81e43c6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725994823988045781,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fddv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13f0b1df-26eb-4a6c-957d-0b7655309
cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34,PodSandboxId:e22af3fbe04a9ba6fe78408371ec5436af690308aa766830d6b7912bf4cabd5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725994820235638728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28b8a13748374dd9556b4c03e74bc5d,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3,PodSandboxId:03f9007efb7a7151b7ebf90f8a2a207dad361176bd3eb7d25992969c784d8bd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725994820247337830,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7662deb051c4e63b75dd3b02a637575b,},Annotations:map[string
]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293,PodSandboxId:5f4dee624e476b7a12bc6013ffdeff28c153726fa728c12051654cba7d2235ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725994820255289950,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e0693761ce7b6880e7e2b2f5137118,},Annotations:map[string]string{io.k
ubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc,PodSandboxId:cfa9f55fd46f24a04d4dc3a0de977528d3c98e9174f7c8a62322251c33d75c19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725994820225407769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7e867521b37d3ca565ac0de14a5983,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ba764ee-16b2-4283-be2f-34350ac52e2b name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.471545148Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fda7aea7-4368-47f3-b908-6b11e1870b83 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.471631668Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fda7aea7-4368-47f3-b908-6b11e1870b83 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.472889009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6fff963-a6d3-42b6-bb79-30050eef753d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.473330975Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995631473307012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6fff963-a6d3-42b6-bb79-30050eef753d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.473990078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f55ef87-443b-4781-83df-e1d682881ca7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.474058334Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f55ef87-443b-4781-83df-e1d682881ca7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.474240146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0,PodSandboxId:241ced956ebccecc9cd11be7255ea9eacc60a5d7fb579b5a5cb928683f7c5af5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725994854862538360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ed78c5-1cce-4d50-a023-5c356f331035,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd1bbbab084640d79766d7d14d3cdc5c66bd653aaae7d35f5cb8135b378c4efc,PodSandboxId:a5aeeb32481e552762401be5447df77c550225026dc65b3b81008bb8152ef1c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1725994833794038972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 13271a5e-f6a8-4a3d-98b2-9eaf96b8dff5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313,PodSandboxId:88ef68c9eb85921397b1c48b3c9679d1315503d56a2c0a25898df69bad8097da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725994831701705826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mt78p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4bbfe99-3c36-4095-b7e8-ee0861f9973f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f,PodSandboxId:241ced956ebccecc9cd11be7255ea9eacc60a5d7fb579b5a5cb928683f7c5af5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725994824014297214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
47ed78c5-1cce-4d50-a023-5c356f331035,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e,PodSandboxId:3cffcbe8ca573f781fa2a7ad185c1e6cfad19524b6a4216d75c164ad81e43c6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725994823988045781,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fddv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13f0b1df-26eb-4a6c-957d-0b7655309
cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34,PodSandboxId:e22af3fbe04a9ba6fe78408371ec5436af690308aa766830d6b7912bf4cabd5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725994820235638728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28b8a13748374dd9556b4c03e74bc5d,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3,PodSandboxId:03f9007efb7a7151b7ebf90f8a2a207dad361176bd3eb7d25992969c784d8bd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725994820247337830,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7662deb051c4e63b75dd3b02a637575b,},Annotations:map[string
]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293,PodSandboxId:5f4dee624e476b7a12bc6013ffdeff28c153726fa728c12051654cba7d2235ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725994820255289950,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e0693761ce7b6880e7e2b2f5137118,},Annotations:map[string]string{io.k
ubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc,PodSandboxId:cfa9f55fd46f24a04d4dc3a0de977528d3c98e9174f7c8a62322251c33d75c19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725994820225407769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7e867521b37d3ca565ac0de14a5983,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f55ef87-443b-4781-83df-e1d682881ca7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.511230275Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ccc555e-c40f-4047-979c-93cc9cc9bcc7 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.511306572Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ccc555e-c40f-4047-979c-93cc9cc9bcc7 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.512776535Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=27b47af2-bf30-426c-9c0c-094edc0fa3eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.513155180Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995631513133144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27b47af2-bf30-426c-9c0c-094edc0fa3eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.513795295Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4478eb38-5dc4-4890-9bf8-9605e4b55b0c name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.513845509Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4478eb38-5dc4-4890-9bf8-9605e4b55b0c name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.514045137Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0,PodSandboxId:241ced956ebccecc9cd11be7255ea9eacc60a5d7fb579b5a5cb928683f7c5af5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725994854862538360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ed78c5-1cce-4d50-a023-5c356f331035,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd1bbbab084640d79766d7d14d3cdc5c66bd653aaae7d35f5cb8135b378c4efc,PodSandboxId:a5aeeb32481e552762401be5447df77c550225026dc65b3b81008bb8152ef1c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1725994833794038972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 13271a5e-f6a8-4a3d-98b2-9eaf96b8dff5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313,PodSandboxId:88ef68c9eb85921397b1c48b3c9679d1315503d56a2c0a25898df69bad8097da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725994831701705826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mt78p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4bbfe99-3c36-4095-b7e8-ee0861f9973f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f,PodSandboxId:241ced956ebccecc9cd11be7255ea9eacc60a5d7fb579b5a5cb928683f7c5af5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725994824014297214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
47ed78c5-1cce-4d50-a023-5c356f331035,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e,PodSandboxId:3cffcbe8ca573f781fa2a7ad185c1e6cfad19524b6a4216d75c164ad81e43c6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725994823988045781,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fddv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13f0b1df-26eb-4a6c-957d-0b7655309
cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34,PodSandboxId:e22af3fbe04a9ba6fe78408371ec5436af690308aa766830d6b7912bf4cabd5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725994820235638728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28b8a13748374dd9556b4c03e74bc5d,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3,PodSandboxId:03f9007efb7a7151b7ebf90f8a2a207dad361176bd3eb7d25992969c784d8bd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725994820247337830,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7662deb051c4e63b75dd3b02a637575b,},Annotations:map[string
]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293,PodSandboxId:5f4dee624e476b7a12bc6013ffdeff28c153726fa728c12051654cba7d2235ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725994820255289950,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e0693761ce7b6880e7e2b2f5137118,},Annotations:map[string]string{io.k
ubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc,PodSandboxId:cfa9f55fd46f24a04d4dc3a0de977528d3c98e9174f7c8a62322251c33d75c19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725994820225407769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7e867521b37d3ca565ac0de14a5983,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4478eb38-5dc4-4890-9bf8-9605e4b55b0c name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.555060364Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2ff84a2d-4c62-41e9-b437-846f7419783c name=/runtime.v1.RuntimeService/Version
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.555158799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2ff84a2d-4c62-41e9-b437-846f7419783c name=/runtime.v1.RuntimeService/Version
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.556576111Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=285312ca-30e7-40d8-94f1-408b83f68421 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.556944198Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995631556922067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=285312ca-30e7-40d8-94f1-408b83f68421 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.557565991Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f8d731c-5174-46b5-8909-2557a3a18a78 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.557655484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f8d731c-5174-46b5-8909-2557a3a18a78 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:13:51 embed-certs-836868 crio[704]: time="2024-09-10 19:13:51.557862562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0,PodSandboxId:241ced956ebccecc9cd11be7255ea9eacc60a5d7fb579b5a5cb928683f7c5af5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725994854862538360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ed78c5-1cce-4d50-a023-5c356f331035,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd1bbbab084640d79766d7d14d3cdc5c66bd653aaae7d35f5cb8135b378c4efc,PodSandboxId:a5aeeb32481e552762401be5447df77c550225026dc65b3b81008bb8152ef1c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1725994833794038972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 13271a5e-f6a8-4a3d-98b2-9eaf96b8dff5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313,PodSandboxId:88ef68c9eb85921397b1c48b3c9679d1315503d56a2c0a25898df69bad8097da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725994831701705826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mt78p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4bbfe99-3c36-4095-b7e8-ee0861f9973f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f,PodSandboxId:241ced956ebccecc9cd11be7255ea9eacc60a5d7fb579b5a5cb928683f7c5af5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725994824014297214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
47ed78c5-1cce-4d50-a023-5c356f331035,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e,PodSandboxId:3cffcbe8ca573f781fa2a7ad185c1e6cfad19524b6a4216d75c164ad81e43c6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725994823988045781,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fddv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13f0b1df-26eb-4a6c-957d-0b7655309
cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34,PodSandboxId:e22af3fbe04a9ba6fe78408371ec5436af690308aa766830d6b7912bf4cabd5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725994820235638728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28b8a13748374dd9556b4c03e74bc5d,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3,PodSandboxId:03f9007efb7a7151b7ebf90f8a2a207dad361176bd3eb7d25992969c784d8bd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725994820247337830,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7662deb051c4e63b75dd3b02a637575b,},Annotations:map[string
]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293,PodSandboxId:5f4dee624e476b7a12bc6013ffdeff28c153726fa728c12051654cba7d2235ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725994820255289950,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e0693761ce7b6880e7e2b2f5137118,},Annotations:map[string]string{io.k
ubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc,PodSandboxId:cfa9f55fd46f24a04d4dc3a0de977528d3c98e9174f7c8a62322251c33d75c19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725994820225407769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7e867521b37d3ca565ac0de14a5983,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f8d731c-5174-46b5-8909-2557a3a18a78 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	11c23ffac9396       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   241ced956ebcc       storage-provisioner
	fd1bbbab08464       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   a5aeeb32481e5       busybox
	6ba324381f8f8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   88ef68c9eb859       coredns-6f6b679f8f-mt78p
	2986c78197602       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   241ced956ebcc       storage-provisioner
	f113a6d74aef2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   3cffcbe8ca573       kube-proxy-4fddv
	b9ad0bbb3de47       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   5f4dee624e476       kube-apiserver-embed-certs-836868
	2582ec871deb8       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   03f9007efb7a7       kube-controller-manager-embed-certs-836868
	4f0241a4c8a31       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   e22af3fbe04a9       etcd-embed-certs-836868
	6a3fc78649970       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   cfa9f55fd46f2       kube-scheduler-embed-certs-836868
	
	
	==> coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43525 - 21576 "HINFO IN 8786414796633565538.1486483400192273916. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010038973s
	
	
	==> describe nodes <==
	Name:               embed-certs-836868
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-836868
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=embed-certs-836868
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T18_51_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:51:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-836868
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 19:13:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 19:11:06 +0000   Tue, 10 Sep 2024 18:51:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 19:11:06 +0000   Tue, 10 Sep 2024 18:51:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 19:11:06 +0000   Tue, 10 Sep 2024 18:51:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 19:11:06 +0000   Tue, 10 Sep 2024 19:00:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    embed-certs-836868
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fad7b25219ca42019c13ea149c801dc4
	  System UUID:                fad7b252-19ca-4201-9c13-ea149c801dc4
	  Boot ID:                    3e25c5c7-bde2-4e61-a1b9-143b7664c1e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-6f6b679f8f-mt78p                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-embed-certs-836868                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-836868             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-836868    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-4fddv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-836868             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-26knw               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-836868 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-836868 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-836868 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-836868 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-836868 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-836868 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node embed-certs-836868 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-836868 event: Registered Node embed-certs-836868 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-836868 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-836868 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-836868 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-836868 event: Registered Node embed-certs-836868 in Controller
	
	
	==> dmesg <==
	[Sep10 18:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053419] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041894] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.146357] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Sep10 19:00] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.614729] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.955578] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.061186] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055959] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.205677] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.128952] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.285168] systemd-fstab-generator[695]: Ignoring "noauto" option for root device
	[  +4.050039] systemd-fstab-generator[786]: Ignoring "noauto" option for root device
	[  +1.990525] systemd-fstab-generator[907]: Ignoring "noauto" option for root device
	[  +0.070121] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.518047] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.452033] systemd-fstab-generator[1540]: Ignoring "noauto" option for root device
	[  +3.276561] kauditd_printk_skb: 64 callbacks suppressed
	[ +25.242236] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] <==
	{"level":"info","ts":"2024-09-10T19:00:20.701132Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T19:00:20.718897Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-10T19:00:20.722724Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ec1614c5c0f7335e","initial-advertise-peer-urls":["https://192.168.39.107:2380"],"listen-peer-urls":["https://192.168.39.107:2380"],"advertise-client-urls":["https://192.168.39.107:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.107:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-10T19:00:20.724628Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-10T19:00:20.720649Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-09-10T19:00:20.725631Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-09-10T19:00:22.029943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-10T19:00:22.030005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-10T19:00:22.030038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgPreVoteResp from ec1614c5c0f7335e at term 2"}
	{"level":"info","ts":"2024-09-10T19:00:22.030051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became candidate at term 3"}
	{"level":"info","ts":"2024-09-10T19:00:22.030057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgVoteResp from ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2024-09-10T19:00:22.030072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became leader at term 3"}
	{"level":"info","ts":"2024-09-10T19:00:22.030080Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ec1614c5c0f7335e elected leader ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2024-09-10T19:00:22.032684Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T19:00:22.032631Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ec1614c5c0f7335e","local-member-attributes":"{Name:embed-certs-836868 ClientURLs:[https://192.168.39.107:2379]}","request-path":"/0/members/ec1614c5c0f7335e/attributes","cluster-id":"1d5c088f9986766d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T19:00:22.033624Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T19:00:22.033808Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T19:00:22.034097Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T19:00:22.034144Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T19:00:22.034450Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T19:00:22.034808Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T19:00:22.035789Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.107:2379"}
	{"level":"info","ts":"2024-09-10T19:10:22.060974Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":852}
	{"level":"info","ts":"2024-09-10T19:10:22.070691Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":852,"took":"9.260192ms","hash":2496125384,"current-db-size-bytes":2703360,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2703360,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-09-10T19:10:22.070746Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2496125384,"revision":852,"compact-revision":-1}
	
	
	==> kernel <==
	 19:13:51 up 13 min,  0 users,  load average: 0.04, 0.08, 0.08
	Linux embed-certs-836868 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0910 19:10:24.317353       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:10:24.317435       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0910 19:10:24.318500       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0910 19:10:24.318644       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0910 19:11:24.319170       1 handler_proxy.go:99] no RequestInfo found in the context
	W0910 19:11:24.319206       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:11:24.319532       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0910 19:11:24.319418       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0910 19:11:24.320778       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0910 19:11:24.320780       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0910 19:13:24.322019       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:13:24.322348       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0910 19:13:24.322019       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:13:24.322579       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0910 19:13:24.323666       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0910 19:13:24.323721       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] <==
	E0910 19:08:26.895904       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:08:27.462165       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:08:56.902072       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:08:57.470091       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:09:26.909308       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:09:27.477856       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:09:56.917557       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:09:57.485460       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:10:26.927228       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:10:27.492378       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:10:56.934656       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:10:57.500384       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0910 19:11:06.386166       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-836868"
	E0910 19:11:26.942162       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:11:27.509195       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0910 19:11:41.653145       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="206.06µs"
	I0910 19:11:54.653200       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="49.485µs"
	E0910 19:11:56.948946       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:11:57.515903       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:12:26.960521       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:12:27.523323       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:12:56.966942       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:12:57.531541       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:13:26.973926       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:13:27.539714       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 19:00:24.224238       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 19:00:24.236833       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.107"]
	E0910 19:00:24.236914       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 19:00:24.284599       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 19:00:24.284710       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 19:00:24.284816       1 server_linux.go:169] "Using iptables Proxier"
	I0910 19:00:24.293901       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 19:00:24.294210       1 server.go:483] "Version info" version="v1.31.0"
	I0910 19:00:24.294587       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 19:00:24.296547       1 config.go:197] "Starting service config controller"
	I0910 19:00:24.296706       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 19:00:24.296837       1 config.go:104] "Starting endpoint slice config controller"
	I0910 19:00:24.296868       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 19:00:24.297441       1 config.go:326] "Starting node config controller"
	I0910 19:00:24.297852       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 19:00:24.397380       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 19:00:24.397445       1 shared_informer.go:320] Caches are synced for service config
	I0910 19:00:24.398821       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] <==
	I0910 19:00:21.147129       1 serving.go:386] Generated self-signed cert in-memory
	W0910 19:00:23.305380       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0910 19:00:23.305626       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0910 19:00:23.305728       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0910 19:00:23.305760       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0910 19:00:23.351208       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0910 19:00:23.351294       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 19:00:23.353390       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0910 19:00:23.353609       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0910 19:00:23.353657       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 19:00:23.353691       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0910 19:00:23.453867       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 19:12:45 embed-certs-836868 kubelet[914]: E0910 19:12:45.637066     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26knw" podUID="fdf89bfa-f2b6-4dc4-9279-ed75c1256494"
	Sep 10 19:12:48 embed-certs-836868 kubelet[914]: E0910 19:12:48.811410     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995568811103429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:48 embed-certs-836868 kubelet[914]: E0910 19:12:48.811515     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995568811103429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:56 embed-certs-836868 kubelet[914]: E0910 19:12:56.638569     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26knw" podUID="fdf89bfa-f2b6-4dc4-9279-ed75c1256494"
	Sep 10 19:12:58 embed-certs-836868 kubelet[914]: E0910 19:12:58.814741     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995578814043377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:12:58 embed-certs-836868 kubelet[914]: E0910 19:12:58.815114     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995578814043377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:08 embed-certs-836868 kubelet[914]: E0910 19:13:08.816672     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995588816266928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:08 embed-certs-836868 kubelet[914]: E0910 19:13:08.816716     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995588816266928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:11 embed-certs-836868 kubelet[914]: E0910 19:13:11.637411     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26knw" podUID="fdf89bfa-f2b6-4dc4-9279-ed75c1256494"
	Sep 10 19:13:18 embed-certs-836868 kubelet[914]: E0910 19:13:18.652024     914 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 19:13:18 embed-certs-836868 kubelet[914]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 19:13:18 embed-certs-836868 kubelet[914]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:13:18 embed-certs-836868 kubelet[914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:13:18 embed-certs-836868 kubelet[914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 19:13:18 embed-certs-836868 kubelet[914]: E0910 19:13:18.818612     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995598818148369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:18 embed-certs-836868 kubelet[914]: E0910 19:13:18.818636     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995598818148369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:24 embed-certs-836868 kubelet[914]: E0910 19:13:24.641042     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26knw" podUID="fdf89bfa-f2b6-4dc4-9279-ed75c1256494"
	Sep 10 19:13:28 embed-certs-836868 kubelet[914]: E0910 19:13:28.821044     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995608820414924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:28 embed-certs-836868 kubelet[914]: E0910 19:13:28.821662     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995608820414924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:38 embed-certs-836868 kubelet[914]: E0910 19:13:38.823707     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995618823151800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:38 embed-certs-836868 kubelet[914]: E0910 19:13:38.823964     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995618823151800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:39 embed-certs-836868 kubelet[914]: E0910 19:13:39.638896     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26knw" podUID="fdf89bfa-f2b6-4dc4-9279-ed75c1256494"
	Sep 10 19:13:48 embed-certs-836868 kubelet[914]: E0910 19:13:48.826158     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995628825436547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:48 embed-certs-836868 kubelet[914]: E0910 19:13:48.826449     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995628825436547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:13:50 embed-certs-836868 kubelet[914]: E0910 19:13:50.640161     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26knw" podUID="fdf89bfa-f2b6-4dc4-9279-ed75c1256494"
	
	
	==> storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] <==
	I0910 19:00:54.952708       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 19:00:54.964116       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 19:00:54.964237       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 19:01:12.361687       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 19:01:12.361936       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-836868_faadb906-17fd-49ac-9744-22e8f8266142!
	I0910 19:01:12.362797       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3df91385-4ac8-4599-b951-2ed815b06ad9", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-836868_faadb906-17fd-49ac-9744-22e8f8266142 became leader
	I0910 19:01:12.462618       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-836868_faadb906-17fd-49ac-9744-22e8f8266142!
	
	
	==> storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] <==
	I0910 19:00:24.144024       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0910 19:00:54.146804       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-836868 -n embed-certs-836868
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-836868 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-26knw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-836868 describe pod metrics-server-6867b74b74-26knw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-836868 describe pod metrics-server-6867b74b74-26knw: exit status 1 (65.770558ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-26knw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-836868 describe pod metrics-server-6867b74b74-26knw: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.13s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:08:27.844742   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:08:38.728773   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:08:43.066485   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:08:56.538505   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:09:18.935490   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:09:23.978995   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:09:38.244554   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:09:38.788225   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:10:06.130227   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:10:38.987086   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:10:47.042193   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:11:01.852477   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:11:35.171069   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:12:02.050182   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:12:04.782117   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:12:15.664189   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:12:55.870136   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:13:43.066046   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:13:56.538403   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:14:23.979441   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:14:38.787963   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
(the preceding warning repeated 60 times in total against the stopped apiserver; identical lines collapsed)
E0910 19:15:38.986353   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
(the preceding warning repeated 56 times in total against the stopped apiserver; identical lines collapsed)
E0910 19:16:35.171877   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
(the preceding warning repeated 25 times in total against the stopped apiserver; identical lines collapsed)
E0910 19:16:59.611122   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
(the preceding warning repeated 5 times in total against the stopped apiserver; identical lines collapsed)
E0910 19:17:04.782410   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-432422 -n old-k8s-version-432422
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-432422 -n old-k8s-version-432422: exit status 2 (219.561025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-432422" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
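The wall of connection-refused warnings above comes from the harness repeatedly listing pods with the k8s-app=kubernetes-dashboard selector against an apiserver that is no longer running, until the 9m0s wait expires. A minimal sketch of that kind of poll, assuming client-go, a placeholder kubeconfig path, and the namespace and selector taken from the log (illustrative only, not the actual helpers_test.go helper):

// Sketch of a label-selector poll like the one behind the warnings above.
// The kubeconfig path is a placeholder; client-go is assumed to be available.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for the profile; the real harness resolves this itself.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/old-k8s-version-432422/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same overall budget as the failing wait in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		switch {
		case err != nil:
			// While the apiserver is down every attempt fails, analogous to the
			// "connection refused" warnings recorded above.
			fmt.Printf("WARNING: pod list returned: %v\n", err)
		case len(pods.Items) > 0:
			fmt.Println("dashboard pod found:", pods.Items[0].Name)
			return
		}
		select {
		case <-ctx.Done():
			// Mirrors the "context deadline exceeded" outcome after 9m0s.
			fmt.Println("timed out waiting for k8s-app=kubernetes-dashboard:", ctx.Err())
			return
		case <-time.After(3 * time.Second):
		}
	}
}

Because the profile's apiserver reports "Stopped" in the status output above, every list attempt fails until the final attempt surfaces the client rate limiter's context-deadline error, which is what start_stop_delete_test.go:274 then records as the test failure.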
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-432422 -n old-k8s-version-432422
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-432422 -n old-k8s-version-432422: exit status 2 (222.824797ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-432422 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-432422 logs -n 25: (1.546166384s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-642043 sudo cat                              | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo find                             | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo crio                             | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-642043                                       | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-186737 | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | disable-driver-mounts-186737                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:52 UTC |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-836868            | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-836868                                  | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-347802             | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-557504  | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC | 10 Sep 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC |                     |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-836868                 | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-836868                                  | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-432422        | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-347802                  | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-557504       | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-432422             | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:56:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:56:02.487676   72122 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:56:02.487789   72122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:56:02.487799   72122 out.go:358] Setting ErrFile to fd 2...
	I0910 18:56:02.487804   72122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:56:02.487953   72122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:56:02.488491   72122 out.go:352] Setting JSON to false
	I0910 18:56:02.489572   72122 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5914,"bootTime":1725988648,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 18:56:02.489637   72122 start.go:139] virtualization: kvm guest
	I0910 18:56:02.491991   72122 out.go:177] * [old-k8s-version-432422] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 18:56:02.493117   72122 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:56:02.493113   72122 notify.go:220] Checking for updates...
	I0910 18:56:02.494213   72122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:56:02.495356   72122 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:56:02.496370   72122 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:56:02.497440   72122 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 18:56:02.498703   72122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:56:02.500450   72122 config.go:182] Loaded profile config "old-k8s-version-432422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0910 18:56:02.501100   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:56:02.501150   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:56:02.515836   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0910 18:56:02.516286   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:56:02.516787   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:56:02.516815   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:56:02.517116   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:56:02.517300   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:56:02.519092   72122 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0910 18:56:02.520121   72122 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:56:02.520405   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:56:02.520436   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:56:02.534860   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34601
	I0910 18:56:02.535243   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:56:02.535688   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:56:02.535711   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:56:02.536004   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:56:02.536215   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:56:02.570682   72122 out.go:177] * Using the kvm2 driver based on existing profile
	I0910 18:56:02.571710   72122 start.go:297] selected driver: kvm2
	I0910 18:56:02.571722   72122 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:56:02.571821   72122 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:56:02.572465   72122 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:56:02.572528   72122 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 18:56:02.587001   72122 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 18:56:02.587381   72122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:56:02.587417   72122 cni.go:84] Creating CNI manager for ""
	I0910 18:56:02.587427   72122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:56:02.587471   72122 start.go:340] cluster config:
	{Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:56:02.587599   72122 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:56:02.589116   72122 out.go:177] * Starting "old-k8s-version-432422" primary control-plane node in "old-k8s-version-432422" cluster
	I0910 18:56:02.590155   72122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 18:56:02.590185   72122 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0910 18:56:02.590194   72122 cache.go:56] Caching tarball of preloaded images
	I0910 18:56:02.590294   72122 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 18:56:02.590313   72122 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0910 18:56:02.590415   72122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/config.json ...
	I0910 18:56:02.590612   72122 start.go:360] acquireMachinesLock for old-k8s-version-432422: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:56:08.097313   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:11.169360   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:17.249255   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:20.321326   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:26.401359   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:29.473351   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:35.553474   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:38.625322   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:44.705324   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:47.777408   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:53.857373   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:56.929356   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:03.009354   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:06.081346   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:12.161342   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:15.233363   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:21.313385   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:24.385281   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:30.465347   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:33.537368   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:39.617395   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:42.689359   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:48.769334   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:51.841388   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:57.921359   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:00.993375   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:07.073343   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:10.145433   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:16.225336   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:19.297345   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:25.377291   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:28.449365   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:34.529306   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:37.601300   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:43.681334   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:46.753328   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:49.757234   71529 start.go:364] duration metric: took 4m17.481092907s to acquireMachinesLock for "no-preload-347802"
	I0910 18:58:49.757299   71529 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:58:49.757316   71529 fix.go:54] fixHost starting: 
	I0910 18:58:49.757667   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:58:49.757694   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:58:49.772681   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41041
	I0910 18:58:49.773067   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:58:49.773498   71529 main.go:141] libmachine: Using API Version  1
	I0910 18:58:49.773518   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:58:49.773963   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:58:49.774127   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:58:49.774279   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 18:58:49.775704   71529 fix.go:112] recreateIfNeeded on no-preload-347802: state=Stopped err=<nil>
	I0910 18:58:49.775726   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	W0910 18:58:49.775886   71529 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:58:49.777669   71529 out.go:177] * Restarting existing kvm2 VM for "no-preload-347802" ...
	I0910 18:58:49.778739   71529 main.go:141] libmachine: (no-preload-347802) Calling .Start
	I0910 18:58:49.778882   71529 main.go:141] libmachine: (no-preload-347802) Ensuring networks are active...
	I0910 18:58:49.779509   71529 main.go:141] libmachine: (no-preload-347802) Ensuring network default is active
	I0910 18:58:49.779824   71529 main.go:141] libmachine: (no-preload-347802) Ensuring network mk-no-preload-347802 is active
	I0910 18:58:49.780121   71529 main.go:141] libmachine: (no-preload-347802) Getting domain xml...
	I0910 18:58:49.780766   71529 main.go:141] libmachine: (no-preload-347802) Creating domain...
	I0910 18:58:50.967405   71529 main.go:141] libmachine: (no-preload-347802) Waiting to get IP...
	I0910 18:58:50.968284   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:50.968647   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:50.968726   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:50.968628   72707 retry.go:31] will retry after 197.094328ms: waiting for machine to come up
	I0910 18:58:51.167237   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:51.167630   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:51.167683   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:51.167603   72707 retry.go:31] will retry after 272.376855ms: waiting for machine to come up
	I0910 18:58:51.441212   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:51.441673   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:51.441698   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:51.441636   72707 retry.go:31] will retry after 458.172114ms: waiting for machine to come up
	I0910 18:58:51.900991   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:51.901464   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:51.901498   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:51.901428   72707 retry.go:31] will retry after 442.42629ms: waiting for machine to come up
	I0910 18:58:49.754913   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:58:49.754977   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 18:58:49.755310   71183 buildroot.go:166] provisioning hostname "embed-certs-836868"
	I0910 18:58:49.755335   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 18:58:49.755513   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 18:58:49.757052   71183 machine.go:96] duration metric: took 4m37.423474417s to provisionDockerMachine
	I0910 18:58:49.757138   71183 fix.go:56] duration metric: took 4m37.44458491s for fixHost
	I0910 18:58:49.757149   71183 start.go:83] releasing machines lock for "embed-certs-836868", held for 4m37.444613055s
	W0910 18:58:49.757173   71183 start.go:714] error starting host: provision: host is not running
	W0910 18:58:49.757263   71183 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0910 18:58:49.757273   71183 start.go:729] Will try again in 5 seconds ...
	I0910 18:58:52.345053   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:52.345519   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:52.345540   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:52.345463   72707 retry.go:31] will retry after 732.353971ms: waiting for machine to come up
	I0910 18:58:53.079229   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:53.079686   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:53.079714   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:53.079638   72707 retry.go:31] will retry after 658.057224ms: waiting for machine to come up
	I0910 18:58:53.739313   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:53.739750   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:53.739811   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:53.739732   72707 retry.go:31] will retry after 910.559952ms: waiting for machine to come up
	I0910 18:58:54.651714   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:54.652075   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:54.652099   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:54.652027   72707 retry.go:31] will retry after 1.410431493s: waiting for machine to come up
	I0910 18:58:56.063996   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:56.064396   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:56.064418   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:56.064360   72707 retry.go:31] will retry after 1.795467467s: waiting for machine to come up
	I0910 18:58:54.759533   71183 start.go:360] acquireMachinesLock for embed-certs-836868: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:58:57.862130   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:57.862484   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:57.862509   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:57.862453   72707 retry.go:31] will retry after 1.450403908s: waiting for machine to come up
	I0910 18:58:59.315197   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:59.315621   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:59.315657   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:59.315566   72707 retry.go:31] will retry after 1.81005281s: waiting for machine to come up
	I0910 18:59:01.128164   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:01.128611   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:59:01.128642   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:59:01.128563   72707 retry.go:31] will retry after 3.333505805s: waiting for machine to come up
	I0910 18:59:04.464526   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:04.465004   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:59:04.465030   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:59:04.464951   72707 retry.go:31] will retry after 3.603817331s: waiting for machine to come up
	I0910 18:59:09.257584   71627 start.go:364] duration metric: took 4m27.770499275s to acquireMachinesLock for "default-k8s-diff-port-557504"
	I0910 18:59:09.257656   71627 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:09.257673   71627 fix.go:54] fixHost starting: 
	I0910 18:59:09.258100   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:09.258144   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:09.276230   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0910 18:59:09.276622   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:09.277129   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:09.277151   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:09.277489   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:09.277663   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:09.277793   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:09.279006   71627 fix.go:112] recreateIfNeeded on default-k8s-diff-port-557504: state=Stopped err=<nil>
	I0910 18:59:09.279043   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	W0910 18:59:09.279178   71627 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:09.281106   71627 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-557504" ...
	I0910 18:59:08.073057   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.073638   71529 main.go:141] libmachine: (no-preload-347802) Found IP for machine: 192.168.50.138
	I0910 18:59:08.073660   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has current primary IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.073666   71529 main.go:141] libmachine: (no-preload-347802) Reserving static IP address...
	I0910 18:59:08.074129   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "no-preload-347802", mac: "52:54:00:5b:b1:44", ip: "192.168.50.138"} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.074153   71529 main.go:141] libmachine: (no-preload-347802) Reserved static IP address: 192.168.50.138
	I0910 18:59:08.074170   71529 main.go:141] libmachine: (no-preload-347802) DBG | skip adding static IP to network mk-no-preload-347802 - found existing host DHCP lease matching {name: "no-preload-347802", mac: "52:54:00:5b:b1:44", ip: "192.168.50.138"}
	I0910 18:59:08.074179   71529 main.go:141] libmachine: (no-preload-347802) Waiting for SSH to be available...
	I0910 18:59:08.074187   71529 main.go:141] libmachine: (no-preload-347802) DBG | Getting to WaitForSSH function...
	I0910 18:59:08.076434   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.076744   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.076767   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.076928   71529 main.go:141] libmachine: (no-preload-347802) DBG | Using SSH client type: external
	I0910 18:59:08.076950   71529 main.go:141] libmachine: (no-preload-347802) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa (-rw-------)
	I0910 18:59:08.076979   71529 main.go:141] libmachine: (no-preload-347802) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:08.076992   71529 main.go:141] libmachine: (no-preload-347802) DBG | About to run SSH command:
	I0910 18:59:08.077029   71529 main.go:141] libmachine: (no-preload-347802) DBG | exit 0
	I0910 18:59:08.201181   71529 main.go:141] libmachine: (no-preload-347802) DBG | SSH cmd err, output: <nil>: 
	I0910 18:59:08.201561   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetConfigRaw
	I0910 18:59:08.202195   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:08.204390   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.204639   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.204676   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.204932   71529 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/config.json ...
	I0910 18:59:08.205227   71529 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:08.205245   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:08.205464   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.207451   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.207833   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.207862   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.207956   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.208120   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.208266   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.208402   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.208584   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.208811   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.208826   71529 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:08.317392   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:08.317421   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetMachineName
	I0910 18:59:08.317693   71529 buildroot.go:166] provisioning hostname "no-preload-347802"
	I0910 18:59:08.317721   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetMachineName
	I0910 18:59:08.317870   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.320440   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.320749   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.320777   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.320922   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.321092   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.321295   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.321433   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.321607   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.321764   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.321778   71529 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-347802 && echo "no-preload-347802" | sudo tee /etc/hostname
	I0910 18:59:08.442907   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-347802
	
	I0910 18:59:08.442932   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.445449   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.445743   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.445769   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.445930   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.446135   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.446308   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.446461   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.446642   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.446831   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.446853   71529 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-347802' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-347802/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-347802' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:59:08.561710   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:59:08.561738   71529 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:08.561760   71529 buildroot.go:174] setting up certificates
	I0910 18:59:08.561771   71529 provision.go:84] configureAuth start
	I0910 18:59:08.561782   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetMachineName
	I0910 18:59:08.562065   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:08.564917   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.565296   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.565318   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.565468   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.567579   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.567883   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.567909   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.567998   71529 provision.go:143] copyHostCerts
	I0910 18:59:08.568062   71529 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:08.568074   71529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:08.568155   71529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:08.568259   71529 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:08.568269   71529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:08.568297   71529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:08.568362   71529 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:08.568369   71529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:08.568398   71529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:08.568457   71529 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.no-preload-347802 san=[127.0.0.1 192.168.50.138 localhost minikube no-preload-347802]
	I0910 18:59:08.635212   71529 provision.go:177] copyRemoteCerts
	I0910 18:59:08.635296   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:08.635321   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.637851   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.638202   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.638227   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.638392   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.638561   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.638727   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.638850   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:08.723477   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:08.747854   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0910 18:59:08.770184   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:08.792105   71529 provision.go:87] duration metric: took 230.324534ms to configureAuth
	I0910 18:59:08.792125   71529 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:08.792306   71529 config.go:182] Loaded profile config "no-preload-347802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:59:08.792389   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.795139   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.795414   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.795440   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.795580   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.795767   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.795931   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.796075   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.796201   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.796385   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.796404   71529 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:09.021498   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:09.021530   71529 machine.go:96] duration metric: took 816.290576ms to provisionDockerMachine
	I0910 18:59:09.021540   71529 start.go:293] postStartSetup for "no-preload-347802" (driver="kvm2")
	I0910 18:59:09.021566   71529 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:09.021587   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.021923   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:09.021951   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.024598   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.024935   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.024965   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.025210   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.025416   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.025598   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.025747   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:09.107986   71529 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:09.111947   71529 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:09.111967   71529 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:09.112028   71529 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:09.112098   71529 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:09.112184   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:09.121734   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:09.144116   71529 start.go:296] duration metric: took 122.562738ms for postStartSetup
	I0910 18:59:09.144159   71529 fix.go:56] duration metric: took 19.386851685s for fixHost
	I0910 18:59:09.144183   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.146816   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.147237   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.147278   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.147396   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.147583   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.147754   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.147886   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.148060   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:09.148274   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:09.148285   71529 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:09.257433   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994749.232014074
	
	I0910 18:59:09.257456   71529 fix.go:216] guest clock: 1725994749.232014074
	I0910 18:59:09.257463   71529 fix.go:229] Guest: 2024-09-10 18:59:09.232014074 +0000 UTC Remote: 2024-09-10 18:59:09.144164668 +0000 UTC m=+277.006797443 (delta=87.849406ms)
	I0910 18:59:09.257478   71529 fix.go:200] guest clock delta is within tolerance: 87.849406ms
	I0910 18:59:09.257491   71529 start.go:83] releasing machines lock for "no-preload-347802", held for 19.50021281s
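	The clock check above runs "date +%s.%N" on the guest over SSH and compares it with the host wall clock, accepting the machine because the skew (87.8ms) is within tolerance. A rough manual approximation of that comparison, reusing the SSH key path from this run (illustrative sketch only, not how the test harness is invoked):
	  # Read the guest's high-resolution clock over SSH, then diff it against the local clock.
	  GUEST=$(ssh -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa \
	      docker@192.168.50.138 'date +%s.%N')
	  HOST=$(date +%s.%N)
	  awk -v h="$HOST" -v g="$GUEST" 'BEGIN { printf "guest clock delta: %.3f s\n", h - g }'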
	I0910 18:59:09.257522   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.257777   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:09.260357   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.260690   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.260715   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.260895   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.261369   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.261545   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.261631   71529 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:09.261681   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.261749   71529 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:09.261774   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.264296   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.264598   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.264630   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.264650   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.264907   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.264992   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.265020   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.265067   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.265189   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.265266   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.265342   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.265400   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:09.265470   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.265602   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:09.367236   71529 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:09.373255   71529 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:09.513271   71529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:09.519091   71529 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:09.519153   71529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:09.534617   71529 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:09.534639   71529 start.go:495] detecting cgroup driver to use...
	I0910 18:59:09.534698   71529 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:09.551186   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:09.565123   71529 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:09.565193   71529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:09.578892   71529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:09.592571   71529 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:09.700953   71529 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:09.831175   71529 docker.go:233] disabling docker service ...
	I0910 18:59:09.831245   71529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:09.845755   71529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:09.858961   71529 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:10.008707   71529 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:10.144588   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:10.158486   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:10.176399   71529 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:59:10.176456   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.186448   71529 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:10.186511   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.196600   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.206639   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.216913   71529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:10.227030   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.237962   71529 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.255181   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.265618   71529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:10.275659   71529 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:10.275713   71529 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:10.288712   71529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:10.301886   71529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:10.415847   71529 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:59:10.500738   71529 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:10.500829   71529 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:10.506564   71529 start.go:563] Will wait 60s for crictl version
	I0910 18:59:10.506620   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:10.510639   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:10.553929   71529 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:10.554034   71529 ssh_runner.go:195] Run: crio --version
	I0910 18:59:10.582508   71529 ssh_runner.go:195] Run: crio --version
	I0910 18:59:10.622516   71529 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
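	The runtime setup above stops the docker/cri-docker units, rewrites /etc/crio/crio.conf.d/02-crio.conf, and then falls back to loading br_netfilter because the bridge sysctl is missing on this Buildroot guest. Condensed into a standalone sketch (the same commands the log shows, gathered here only for readability):
	  # The bridge netfilter sysctl is absent until the module is loaded, hence the fallback.
	  sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
	  # Pod traffic needs IPv4 forwarding enabled on the node.
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	  # Pick up the crio.conf.d edits (pause image, cgroupfs driver, unprivileged ports).
	  sudo systemctl daemon-reload
	  sudo systemctl restart crio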
	I0910 18:59:09.282182   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Start
	I0910 18:59:09.282345   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Ensuring networks are active...
	I0910 18:59:09.282958   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Ensuring network default is active
	I0910 18:59:09.283450   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Ensuring network mk-default-k8s-diff-port-557504 is active
	I0910 18:59:09.283810   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Getting domain xml...
	I0910 18:59:09.284454   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Creating domain...
	I0910 18:59:10.513168   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting to get IP...
	I0910 18:59:10.514173   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.514591   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.514681   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:10.514587   72843 retry.go:31] will retry after 228.672382ms: waiting for machine to come up
	I0910 18:59:10.745046   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.745450   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.745508   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:10.745440   72843 retry.go:31] will retry after 329.196616ms: waiting for machine to come up
	I0910 18:59:11.075777   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.076237   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.076269   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:11.076188   72843 retry.go:31] will retry after 317.98463ms: waiting for machine to come up
	I0910 18:59:10.623864   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:10.626709   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:10.627042   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:10.627084   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:10.627336   71529 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:10.631579   71529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:10.644077   71529 kubeadm.go:883] updating cluster {Name:no-preload-347802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-347802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:10.644183   71529 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:59:10.644215   71529 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:10.679225   71529 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 18:59:10.679247   71529 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 18:59:10.679332   71529 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:10.679346   71529 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:10.679371   71529 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:10.679384   71529 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0910 18:59:10.679395   71529 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.679472   71529 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.679371   71529 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:10.679336   71529 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:10.681147   71529 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.681163   71529 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:10.681183   71529 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.681196   71529 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0910 18:59:10.681163   71529 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:10.681189   71529 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:10.681232   71529 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:10.681304   71529 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:10.841312   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.848638   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.872351   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:10.875581   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:10.882457   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:10.894360   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0910 18:59:10.895305   71529 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0910 18:59:10.895341   71529 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.895379   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:10.898460   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:10.953614   71529 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0910 18:59:10.953659   71529 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.953706   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.042770   71529 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0910 18:59:11.042837   71529 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0910 18:59:11.042862   71529 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.042873   71529 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.042914   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.042914   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.042820   71529 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0910 18:59:11.043065   71529 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.043097   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.129993   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:11.130090   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:11.130018   71529 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0910 18:59:11.130143   71529 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.130187   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.130189   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.130206   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.130271   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.239573   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:11.239626   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:11.241780   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.241795   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.241853   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.241883   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.360008   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:11.360027   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:11.360067   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.371623   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.371632   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.371632   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.480504   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0910 18:59:11.480591   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.480615   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0910 18:59:11.480635   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0910 18:59:11.480725   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0910 18:59:11.488248   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:11.510860   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0910 18:59:11.510950   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0910 18:59:11.510959   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0910 18:59:11.511032   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0910 18:59:11.514065   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0910 18:59:11.514136   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0910 18:59:11.555358   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0910 18:59:11.555425   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0910 18:59:11.555445   71529 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0910 18:59:11.555465   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0910 18:59:11.555491   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0910 18:59:11.555497   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0910 18:59:11.578210   71529 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0910 18:59:11.578227   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0910 18:59:11.578258   71529 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:11.578273   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0910 18:59:11.578306   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.578345   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0910 18:59:11.578310   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
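	Because no preload tarball exists for this k8s version, the run above checks every required image in the CRI-O store, removes any stale tag, and loads the cached tarball with podman; the harness interleaves those steps across all images. For a single image the sequence boils down to the following sketch (image name and tarball path taken from the log above):
	  IMG=registry.k8s.io/etcd:3.5.15-0
	  TAR=/var/lib/minikube/images/etcd_3.5.15-0
	  # Already present in the runtime? If not, clear any stale tag and load the cached tarball.
	  sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1 || {
	    sudo /usr/bin/crictl rmi "$IMG" || true
	    sudo podman load -i "$TAR"
	  }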
	I0910 18:59:11.395907   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.396361   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.396389   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:11.396320   72843 retry.go:31] will retry after 511.273215ms: waiting for machine to come up
	I0910 18:59:11.909582   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.910012   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.910041   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:11.909957   72843 retry.go:31] will retry after 712.801984ms: waiting for machine to come up
	I0910 18:59:12.624608   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:12.625042   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:12.625083   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:12.625014   72843 retry.go:31] will retry after 873.57855ms: waiting for machine to come up
	I0910 18:59:13.499767   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:13.500117   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:13.500144   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:13.500071   72843 retry.go:31] will retry after 1.180667971s: waiting for machine to come up
	I0910 18:59:14.682848   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:14.683351   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:14.683381   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:14.683297   72843 retry.go:31] will retry after 1.211684184s: waiting for machine to come up
	I0910 18:59:15.896172   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:15.896651   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:15.896679   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:15.896597   72843 retry.go:31] will retry after 1.541313035s: waiting for machine to come up
	I0910 18:59:13.534642   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.978971061s)
	I0910 18:59:13.534680   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0910 18:59:13.534686   71529 ssh_runner.go:235] Completed: which crictl: (1.956359959s)
	I0910 18:59:13.534704   71529 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0910 18:59:13.534753   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:13.534754   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0910 18:59:13.580670   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:17.439293   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:17.439652   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:17.439679   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:17.439607   72843 retry.go:31] will retry after 2.232253017s: waiting for machine to come up
	I0910 18:59:19.673727   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:19.674114   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:19.674141   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:19.674070   72843 retry.go:31] will retry after 2.324233118s: waiting for machine to come up
	I0910 18:59:17.225574   71529 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.644871938s)
	I0910 18:59:17.225574   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.690724664s)
	I0910 18:59:17.225647   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0910 18:59:17.225671   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:17.225676   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0910 18:59:17.225702   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0910 18:59:19.705947   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.48021773s)
	I0910 18:59:19.705982   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0910 18:59:19.706006   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0910 18:59:19.706045   71529 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.480359026s)
	I0910 18:59:19.706069   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0910 18:59:19.706098   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0910 18:59:19.706176   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0910 18:59:21.666588   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.960494926s)
	I0910 18:59:21.666623   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0910 18:59:21.666640   71529 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.960446302s)
	I0910 18:59:21.666648   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0910 18:59:21.666666   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0910 18:59:21.666699   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0910 18:59:22.000591   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:22.001014   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:22.001047   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:22.000951   72843 retry.go:31] will retry after 3.327224401s: waiting for machine to come up
	I0910 18:59:25.329967   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:25.330414   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:25.330445   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:25.330367   72843 retry.go:31] will retry after 3.45596573s: waiting for machine to come up
	I0910 18:59:23.216195   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.549470753s)
	I0910 18:59:23.216223   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0910 18:59:23.216243   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0910 18:59:23.216286   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0910 18:59:25.077483   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.861176975s)
	I0910 18:59:25.077515   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0910 18:59:25.077547   71529 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0910 18:59:25.077640   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0910 18:59:25.919427   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0910 18:59:25.919478   71529 cache_images.go:123] Successfully loaded all cached images
	I0910 18:59:25.919486   71529 cache_images.go:92] duration metric: took 15.240223152s to LoadCachedImages
	I0910 18:59:25.919502   71529 kubeadm.go:934] updating node { 192.168.50.138 8443 v1.31.0 crio true true} ...
	I0910 18:59:25.919622   71529 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-347802 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-347802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:59:25.919710   71529 ssh_runner.go:195] Run: crio config
	I0910 18:59:25.964461   71529 cni.go:84] Creating CNI manager for ""
	I0910 18:59:25.964489   71529 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:25.964509   71529 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:25.964535   71529 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.138 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-347802 NodeName:no-preload-347802 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:59:25.964698   71529 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-347802"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
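	The generated InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration document above is copied to the guest as /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below). If one wanted to sanity-check that file by hand, kubeadm's dry-run mode would accept it; this is a hypothetical spot check, not something the test itself performs:
	  # Render what kubeadm would do with the generated config without touching the node.
	  sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run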
	I0910 18:59:25.964780   71529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:59:25.975304   71529 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:25.975371   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:25.985124   71529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0910 18:59:26.003355   71529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:26.020117   71529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0910 18:59:26.037026   71529 ssh_runner.go:195] Run: grep 192.168.50.138	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:26.041140   71529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:26.053643   71529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:26.175281   71529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:26.193153   71529 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802 for IP: 192.168.50.138
	I0910 18:59:26.193181   71529 certs.go:194] generating shared ca certs ...
	I0910 18:59:26.193203   71529 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:26.193398   71529 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:26.193452   71529 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:26.193466   71529 certs.go:256] generating profile certs ...
	I0910 18:59:26.193582   71529 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/client.key
	I0910 18:59:26.193664   71529 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/apiserver.key.93ff3787
	I0910 18:59:26.193722   71529 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/proxy-client.key
	I0910 18:59:26.193871   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:26.193924   71529 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:26.193978   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:26.194026   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:26.194053   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:26.194083   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:26.194132   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:26.194868   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:26.231957   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:26.280213   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:26.310722   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:26.347855   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0910 18:59:26.386495   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:59:26.411742   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:26.435728   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:59:26.460305   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:26.484974   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:26.508782   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:26.531397   71529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:26.548219   71529 ssh_runner.go:195] Run: openssl version
	I0910 18:59:26.553969   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:26.564950   71529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:26.569539   71529 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:26.569594   71529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:26.575677   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:59:26.586342   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:26.606946   71529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:26.611671   71529 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:26.611720   71529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:26.617271   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:26.627833   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:26.638225   71529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:26.642722   71529 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:26.642759   71529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:26.648359   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:26.659003   71529 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:26.663236   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:26.668896   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:26.674346   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:26.680028   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:26.685462   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:26.691097   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
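
The "openssl x509 -checkend 86400" runs above verify that each control-plane certificate stays valid for at least another 86400 seconds (one day). A minimal Go sketch of the same check, using only the standard library (the helper name expiresWithin is invented here; only the certificate path comes from the log above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the next d, mirroring "openssl x509 -checkend <seconds>".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
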
	I0910 18:59:26.696620   71529 kubeadm.go:392] StartCluster: {Name:no-preload-347802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-347802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:26.696704   71529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:26.696746   71529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:26.733823   71529 cri.go:89] found id: ""
	I0910 18:59:26.733883   71529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:26.744565   71529 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:26.744584   71529 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:26.744620   71529 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:26.754754   71529 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:26.755687   71529 kubeconfig.go:125] found "no-preload-347802" server: "https://192.168.50.138:8443"
	I0910 18:59:26.757732   71529 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:26.767140   71529 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.138
	I0910 18:59:26.767167   71529 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:26.767180   71529 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:26.767235   71529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:26.805555   71529 cri.go:89] found id: ""
	I0910 18:59:26.805616   71529 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:26.822806   71529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:26.832434   71529 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:26.832456   71529 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:26.832499   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:59:26.841225   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:26.841288   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:26.850145   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:59:26.859016   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:26.859070   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:26.868806   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:59:26.877814   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:26.877867   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:26.886985   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:59:26.895859   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:26.895911   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:59:26.905600   71529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:59:26.915716   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:27.038963   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:30.202285   72122 start.go:364] duration metric: took 3m27.611616445s to acquireMachinesLock for "old-k8s-version-432422"
	I0910 18:59:30.202346   72122 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:30.202377   72122 fix.go:54] fixHost starting: 
	I0910 18:59:30.202807   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:30.202842   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:30.222440   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0910 18:59:30.222927   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:30.223415   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:59:30.223435   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:30.223748   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:30.223905   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:30.224034   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetState
	I0910 18:59:30.225464   72122 fix.go:112] recreateIfNeeded on old-k8s-version-432422: state=Stopped err=<nil>
	I0910 18:59:30.225505   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	W0910 18:59:30.225655   72122 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:30.227698   72122 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-432422" ...
	I0910 18:59:28.790020   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.790390   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Found IP for machine: 192.168.72.54
	I0910 18:59:28.790424   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has current primary IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.790435   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Reserving static IP address...
	I0910 18:59:28.790758   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Reserved static IP address: 192.168.72.54
	I0910 18:59:28.790780   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for SSH to be available...
	I0910 18:59:28.790811   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-557504", mac: "52:54:00:19:b8:3d", ip: "192.168.72.54"} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.790839   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | skip adding static IP to network mk-default-k8s-diff-port-557504 - found existing host DHCP lease matching {name: "default-k8s-diff-port-557504", mac: "52:54:00:19:b8:3d", ip: "192.168.72.54"}
	I0910 18:59:28.790856   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Getting to WaitForSSH function...
	I0910 18:59:28.792644   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.792947   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.792978   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.793114   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Using SSH client type: external
	I0910 18:59:28.793135   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa (-rw-------)
	I0910 18:59:28.793192   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:28.793242   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | About to run SSH command:
	I0910 18:59:28.793272   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | exit 0
	I0910 18:59:28.921644   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | SSH cmd err, output: <nil>: 
	I0910 18:59:28.921983   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetConfigRaw
	I0910 18:59:28.922663   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:28.925273   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.925614   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.925639   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.925884   71627 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/config.json ...
	I0910 18:59:28.926061   71627 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:28.926077   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:28.926272   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:28.928411   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.928731   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.928758   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.928909   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:28.929096   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:28.929249   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:28.929371   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:28.929552   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:28.929722   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:28.929732   71627 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:29.041454   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:29.041486   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetMachineName
	I0910 18:59:29.041745   71627 buildroot.go:166] provisioning hostname "default-k8s-diff-port-557504"
	I0910 18:59:29.041766   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetMachineName
	I0910 18:59:29.041965   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.044784   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.045141   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.045182   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.045358   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.045528   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.045705   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.045810   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.045968   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:29.046158   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:29.046173   71627 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-557504 && echo "default-k8s-diff-port-557504" | sudo tee /etc/hostname
	I0910 18:59:29.180227   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-557504
	
	I0910 18:59:29.180257   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.182815   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.183166   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.183200   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.183416   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.183612   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.183779   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.183883   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.184053   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:29.184258   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:29.184276   71627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-557504' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-557504/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-557504' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:59:29.315908   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:59:29.315942   71627 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:29.315981   71627 buildroot.go:174] setting up certificates
	I0910 18:59:29.315996   71627 provision.go:84] configureAuth start
	I0910 18:59:29.316013   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetMachineName
	I0910 18:59:29.316262   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:29.319207   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.319580   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.319609   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.319780   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.321973   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.322318   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.322352   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.322499   71627 provision.go:143] copyHostCerts
	I0910 18:59:29.322564   71627 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:29.322577   71627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:29.322647   71627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:29.322772   71627 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:29.322786   71627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:29.322832   71627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:29.322938   71627 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:29.322951   71627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:29.322986   71627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:29.323065   71627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-557504 san=[127.0.0.1 192.168.72.54 default-k8s-diff-port-557504 localhost minikube]
	I0910 18:59:29.488131   71627 provision.go:177] copyRemoteCerts
	I0910 18:59:29.488187   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:29.488210   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.491095   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.491441   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.491467   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.491666   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.491830   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.491973   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.492123   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:29.584016   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:29.614749   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0910 18:59:29.646904   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:29.677788   71627 provision.go:87] duration metric: took 361.777725ms to configureAuth
	I0910 18:59:29.677820   71627 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:29.678048   71627 config.go:182] Loaded profile config "default-k8s-diff-port-557504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:59:29.678135   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.680932   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.681372   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.681394   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.681674   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.681868   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.682011   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.682175   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.682431   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:29.682638   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:29.682665   71627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:29.934027   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:29.934058   71627 machine.go:96] duration metric: took 1.007985288s to provisionDockerMachine
	I0910 18:59:29.934071   71627 start.go:293] postStartSetup for "default-k8s-diff-port-557504" (driver="kvm2")
	I0910 18:59:29.934084   71627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:29.934104   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:29.934415   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:29.934447   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.937552   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.937917   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.937948   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.938110   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.938315   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.938496   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.938645   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:30.030842   71627 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:30.036158   71627 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:30.036180   71627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:30.036267   71627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:30.036380   71627 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:30.036520   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:30.048860   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:30.075362   71627 start.go:296] duration metric: took 141.276186ms for postStartSetup
	I0910 18:59:30.075398   71627 fix.go:56] duration metric: took 20.817735357s for fixHost
	I0910 18:59:30.075421   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:30.078501   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.078996   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.079026   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.079195   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:30.079373   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.079561   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.079704   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:30.079908   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:30.080089   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:30.080102   71627 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:30.202112   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994770.178719125
	
	I0910 18:59:30.202139   71627 fix.go:216] guest clock: 1725994770.178719125
	I0910 18:59:30.202149   71627 fix.go:229] Guest: 2024-09-10 18:59:30.178719125 +0000 UTC Remote: 2024-09-10 18:59:30.075402937 +0000 UTC m=+288.723404352 (delta=103.316188ms)
	I0910 18:59:30.202175   71627 fix.go:200] guest clock delta is within tolerance: 103.316188ms
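
The fix.go lines above compare the guest's "date +%s.%N" output against the local clock and accept the ~103ms delta as within tolerance. A rough Go sketch of that comparison (clockDelta is an invented helper; only the timestamp string comes from the log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses "date +%s.%N" output from the guest and returns how far
// the guest clock is ahead of (positive) or behind (negative) the local time.
func clockDelta(guestOutput string, local time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or truncate the fractional part to exactly nine digits (nanoseconds).
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(local), nil
}

func main() {
	delta, err := clockDelta("1725994770.178719125", time.Now())
	if err != nil {
		panic(err)
	}
	// A delta of a few hundred milliseconds is treated as acceptable drift;
	// a large delta would instead prompt a clock resync on the guest.
	fmt.Println("guest clock delta:", delta)
}
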
	I0910 18:59:30.202184   71627 start.go:83] releasing machines lock for "default-k8s-diff-port-557504", held for 20.944552577s
	I0910 18:59:30.202221   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.202522   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:30.205728   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.206068   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.206101   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.206267   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.206830   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.207011   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.207100   71627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:30.207171   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:30.207378   71627 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:30.207399   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:30.209851   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210130   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210182   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.210220   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210400   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:30.210553   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.210555   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.210625   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210735   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:30.210785   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:30.210849   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.210949   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:30.211002   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:30.211132   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:30.317738   71627 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:30.325333   71627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:30.485483   71627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:30.492979   71627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:30.493064   71627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:30.518974   71627 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:30.518998   71627 start.go:495] detecting cgroup driver to use...
	I0910 18:59:30.519192   71627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:30.539578   71627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:30.554986   71627 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:30.555045   71627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:30.570454   71627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:30.590125   71627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:30.738819   71627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:30.930750   71627 docker.go:233] disabling docker service ...
	I0910 18:59:30.930811   71627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:30.946226   71627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:30.961633   71627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:31.086069   71627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:31.208629   71627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:31.225988   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:31.248059   71627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:59:31.248127   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.260212   71627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:31.260296   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.271128   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.282002   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.296901   71627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:31.309739   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.325469   71627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.350404   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
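
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands in this log, not copied from the VM):

pause_image = "registry.k8s.io/pause:3.10"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
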
	I0910 18:59:31.366130   71627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:31.379206   71627 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:31.379259   71627 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:31.395015   71627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:31.406339   71627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:31.538783   71627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:59:31.656815   71627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:31.656886   71627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:31.665263   71627 start.go:563] Will wait 60s for crictl version
	I0910 18:59:31.665333   71627 ssh_runner.go:195] Run: which crictl
	I0910 18:59:31.670317   71627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:31.719549   71627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:31.719641   71627 ssh_runner.go:195] Run: crio --version
	I0910 18:59:31.753801   71627 ssh_runner.go:195] Run: crio --version
	I0910 18:59:31.787092   71627 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 18:59:28.257536   71529 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.218537615s)
	I0910 18:59:28.257562   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:28.451173   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:28.516432   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:28.605746   71529 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:59:28.605823   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:29.106870   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:29.606340   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:29.623814   71529 api_server.go:72] duration metric: took 1.018071553s to wait for apiserver process to appear ...
	I0910 18:59:29.623842   71529 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:59:29.623864   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:29.624282   71529 api_server.go:269] stopped: https://192.168.50.138:8443/healthz: Get "https://192.168.50.138:8443/healthz": dial tcp 192.168.50.138:8443: connect: connection refused
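
The api_server.go lines above poll https://192.168.50.138:8443/healthz until the endpoint answers: the first attempt fails with connection refused because the apiserver is still starting, and later attempts (further down in this log) return 403 and then 500 while the apiserver's bootstrap post-start hooks finish. A simplified Go sketch of such a wait loop (waitForHealthz is an invented name, and skipping TLS verification is a shortcut for the sketch rather than what minikube does; only the URL comes from the log):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// Any response at all (even 403 or 500) shows the apiserver is reachable.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// This sketch skips certificate verification instead of loading the
		// minikubeCA bundle that signs the apiserver certificate.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("not reachable yet:", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Println("reachable, healthz status:", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.138:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
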
	I0910 18:59:30.124145   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:30.228896   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .Start
	I0910 18:59:30.229066   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring networks are active...
	I0910 18:59:30.229735   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring network default is active
	I0910 18:59:30.230126   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring network mk-old-k8s-version-432422 is active
	I0910 18:59:30.230559   72122 main.go:141] libmachine: (old-k8s-version-432422) Getting domain xml...
	I0910 18:59:30.231206   72122 main.go:141] libmachine: (old-k8s-version-432422) Creating domain...
	I0910 18:59:31.669616   72122 main.go:141] libmachine: (old-k8s-version-432422) Waiting to get IP...
	I0910 18:59:31.670682   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:31.671124   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:31.671225   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:31.671101   72995 retry.go:31] will retry after 285.109621ms: waiting for machine to come up
	I0910 18:59:31.957711   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:31.958140   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:31.958169   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:31.958103   72995 retry.go:31] will retry after 306.703176ms: waiting for machine to come up
	I0910 18:59:32.266797   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:32.267299   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:32.267333   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:32.267226   72995 retry.go:31] will retry after 327.953362ms: waiting for machine to come up
	I0910 18:59:32.494151   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:59:32.494177   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:59:32.494193   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:32.550283   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:59:32.550317   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:59:32.624486   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:32.646548   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:32.646583   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:33.124697   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:33.139775   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:33.139814   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:33.623998   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:33.632392   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:33.632430   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:34.123979   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:34.133552   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0910 18:59:34.143511   71529 api_server.go:141] control plane version: v1.31.0
	I0910 18:59:34.143543   71529 api_server.go:131] duration metric: took 4.519693435s to wait for apiserver health ...
	I0910 18:59:34.143552   71529 cni.go:84] Creating CNI manager for ""
	I0910 18:59:34.143558   71529 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:34.145562   71529 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
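The healthz wait recorded above amounts to repeatedly polling the apiserver's /healthz endpoint until its post-start hooks stop answering 403/500 and it returns 200. A minimal Go sketch of that kind of loop follows; the 500ms interval, the 4-minute timeout and the skipped TLS verification are assumptions for illustration, not minikube's actual api_server.go code.

// Illustrative only: poll an apiserver healthz URL until it reports healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver's serving certificate is not trusted by the probing
		// host here, so verification is skipped for the health probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "healthz returned 200: ok"
			}
			// 403/500 while post-start hooks are still running: retry below.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.138:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}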
	I0910 18:59:31.788472   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:31.791698   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:31.792063   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:31.792102   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:31.792342   71627 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:31.798045   71627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:31.814552   71627 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-557504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-557504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:31.814718   71627 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:59:31.814775   71627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:31.863576   71627 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 18:59:31.863655   71627 ssh_runner.go:195] Run: which lz4
	I0910 18:59:31.868776   71627 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 18:59:31.874162   71627 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 18:59:31.874194   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 18:59:33.358271   71627 crio.go:462] duration metric: took 1.489531006s to copy over tarball
	I0910 18:59:33.358356   71627 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 18:59:35.759805   71627 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.401424942s)
	I0910 18:59:35.759833   71627 crio.go:469] duration metric: took 2.401529016s to extract the tarball
	I0910 18:59:35.759842   71627 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 18:59:35.797349   71627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:35.849544   71627 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:59:35.849571   71627 cache_images.go:84] Images are preloaded, skipping loading
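The preload handling above first stats /preloaded.tar.lz4 on the guest (exit status 1 meaning "not present"), copies the cached tarball over, and extracts it with lz4 before re-listing images. A minimal local sketch of the check-and-extract step is below; the paths and tar flags are taken from the log, while the scp/ssh transport is omitted and left as a comment.

// Illustrative sketch: verify the preload tarball exists, then extract it.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func ensurePreload(tarball, destDir string) error {
	if _, err := os.Stat(tarball); err != nil {
		// Mirrors the "existence check ... Process exited with status 1" case:
		// the real flow would scp the cached tarball into place before retrying.
		return fmt.Errorf("preload tarball missing: %w", err)
	}
	// Same shape as the logged command:
	// tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := ensurePreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}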
	I0910 18:59:35.849583   71627 kubeadm.go:934] updating node { 192.168.72.54 8444 v1.31.0 crio true true} ...
	I0910 18:59:35.849706   71627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-557504 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-557504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:59:35.849783   71627 ssh_runner.go:195] Run: crio config
	I0910 18:59:35.896486   71627 cni.go:84] Creating CNI manager for ""
	I0910 18:59:35.896514   71627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:35.896534   71627 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:35.896556   71627 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-557504 NodeName:default-k8s-diff-port-557504 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:59:35.896707   71627 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-557504"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:59:35.896777   71627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:59:35.907249   71627 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:35.907337   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:35.917196   71627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0910 18:59:35.935072   71627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:35.953823   71627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0910 18:59:35.970728   71627 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:35.974648   71627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:35.986487   71627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:36.144443   71627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:36.164942   71627 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504 for IP: 192.168.72.54
	I0910 18:59:36.164972   71627 certs.go:194] generating shared ca certs ...
	I0910 18:59:36.164990   71627 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:36.165172   71627 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:36.165242   71627 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:36.165255   71627 certs.go:256] generating profile certs ...
	I0910 18:59:36.165382   71627 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/client.key
	I0910 18:59:36.165460   71627 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/apiserver.key.5cc31a18
	I0910 18:59:36.165505   71627 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/proxy-client.key
	I0910 18:59:36.165640   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:36.165680   71627 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:36.165700   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:36.165733   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:36.165770   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:36.165803   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:36.165874   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:36.166687   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:36.203302   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:36.230599   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:36.269735   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:36.311674   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0910 18:59:36.354614   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 18:59:36.379082   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:34.146903   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 18:59:34.163037   71529 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 18:59:34.189830   71529 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:59:34.200702   71529 system_pods.go:59] 8 kube-system pods found
	I0910 18:59:34.200751   71529 system_pods.go:61] "coredns-6f6b679f8f-54rpl" [2e301d43-a54a-4836-abf8-a45f5bc15889] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 18:59:34.200762   71529 system_pods.go:61] "etcd-no-preload-347802" [0fdffb97-72c6-4588-9593-46bcbed0a9fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 18:59:34.200773   71529 system_pods.go:61] "kube-apiserver-no-preload-347802" [3cf5abac-1d94-4ee2-a962-9daad308ec8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 18:59:34.200782   71529 system_pods.go:61] "kube-controller-manager-no-preload-347802" [6769757d-57fd-46c8-8f78-d20f80e592d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 18:59:34.200788   71529 system_pods.go:61] "kube-proxy-7v9n8" [d01842ad-3dae-49e1-8570-db9bcf4d0afc] Running
	I0910 18:59:34.200797   71529 system_pods.go:61] "kube-scheduler-no-preload-347802" [20e59c6b-4387-4dd0-b242-78d107775275] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 18:59:34.200804   71529 system_pods.go:61] "metrics-server-6867b74b74-w8rqv" [52535081-4503-4136-963d-6b2db6c0224e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 18:59:34.200809   71529 system_pods.go:61] "storage-provisioner" [9f7c0178-7194-4c73-95a4-5a3c0091f3ac] Running
	I0910 18:59:34.200816   71529 system_pods.go:74] duration metric: took 10.965409ms to wait for pod list to return data ...
	I0910 18:59:34.200857   71529 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:59:34.204544   71529 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:59:34.204568   71529 node_conditions.go:123] node cpu capacity is 2
	I0910 18:59:34.204580   71529 node_conditions.go:105] duration metric: took 3.714534ms to run NodePressure ...
	I0910 18:59:34.204597   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:34.487106   71529 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 18:59:34.491817   71529 kubeadm.go:739] kubelet initialised
	I0910 18:59:34.491838   71529 kubeadm.go:740] duration metric: took 4.708046ms waiting for restarted kubelet to initialise ...
	I0910 18:59:34.491845   71529 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:34.496604   71529 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:34.501535   71529 pod_ready.go:98] node "no-preload-347802" hosting pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.501553   71529 pod_ready.go:82] duration metric: took 4.927724ms for pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:34.501561   71529 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-347802" hosting pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.501567   71529 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:34.505473   71529 pod_ready.go:98] node "no-preload-347802" hosting pod "etcd-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.505491   71529 pod_ready.go:82] duration metric: took 3.917111ms for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:34.505499   71529 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-347802" hosting pod "etcd-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.505507   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:34.510025   71529 pod_ready.go:98] node "no-preload-347802" hosting pod "kube-apiserver-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.510043   71529 pod_ready.go:82] duration metric: took 4.522609ms for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:34.510050   71529 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-347802" hosting pod "kube-apiserver-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.510056   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:36.519023   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:32.597017   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:32.597589   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:32.597616   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:32.597554   72995 retry.go:31] will retry after 448.654363ms: waiting for machine to come up
	I0910 18:59:33.048100   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:33.048559   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:33.048590   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:33.048478   72995 retry.go:31] will retry after 654.829574ms: waiting for machine to come up
	I0910 18:59:33.704902   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:33.705446   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:33.705475   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:33.705363   72995 retry.go:31] will retry after 610.514078ms: waiting for machine to come up
	I0910 18:59:34.316978   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:34.317481   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:34.317503   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:34.317430   72995 retry.go:31] will retry after 1.125805817s: waiting for machine to come up
	I0910 18:59:35.444880   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:35.445369   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:35.445394   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:35.445312   72995 retry.go:31] will retry after 1.484426931s: waiting for machine to come up
	I0910 18:59:36.931028   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:36.931568   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:36.931613   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:36.931524   72995 retry.go:31] will retry after 1.819998768s: waiting for machine to come up
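The retry.go messages above show the machine-start wait: look up the domain's current IP, and if one is not assigned yet, sleep a growing, jittered delay and try again until a timeout. A minimal sketch of that retry pattern follows; the initial delay, the doubling policy and the stub lookup function are assumptions for illustration.

// Illustrative sketch: wait for a VM to report an IP with jittered backoff.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		// Jittered, roughly doubling backoff, in the spirit of the
		// "will retry after ..." lines above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return "", errors.New("machine did not come up in time")
}

func main() {
	ip, err := waitForIP(func() (string, error) {
		return "", errors.New("unable to find current IP address")
	}, 2*time.Second)
	fmt.Println(ip, err)
}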
	I0910 18:59:36.403353   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 18:59:36.427345   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:36.452765   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:36.485795   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:36.512944   71627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:36.532454   71627 ssh_runner.go:195] Run: openssl version
	I0910 18:59:36.538449   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:36.550806   71627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:36.555761   71627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:36.555819   71627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:36.562430   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:36.573730   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:36.584987   71627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:36.589551   71627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:36.589615   71627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:36.595496   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:36.607821   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:36.620298   71627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:36.624888   71627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:36.624939   71627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:36.630534   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:59:36.641657   71627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:36.646317   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:36.652748   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:36.661166   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:36.670240   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:36.676776   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:36.686442   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
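The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate expires within the next 24 hours. An equivalent check can be sketched with Go's crypto/x509; the path below is one of the logged files and simply stands in for any of them.

// Illustrative sketch: report whether a PEM certificate expires within a window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if the certificate's NotAfter falls inside the next `window`,
	// matching openssl's -checkend semantics.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}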
	I0910 18:59:36.693233   71627 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-557504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-557504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:36.693351   71627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:36.693414   71627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:36.743159   71627 cri.go:89] found id: ""
	I0910 18:59:36.743256   71627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:36.754428   71627 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:36.754451   71627 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:36.754505   71627 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:36.765126   71627 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:36.766213   71627 kubeconfig.go:125] found "default-k8s-diff-port-557504" server: "https://192.168.72.54:8444"
	I0910 18:59:36.768428   71627 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:36.778678   71627 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I0910 18:59:36.778715   71627 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:36.778728   71627 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:36.778779   71627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:36.824031   71627 cri.go:89] found id: ""
	I0910 18:59:36.824107   71627 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:36.840585   71627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:36.851445   71627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:36.851462   71627 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:36.851508   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0910 18:59:36.860630   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:36.860682   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:36.869973   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0910 18:59:36.880034   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:36.880099   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:36.889684   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0910 18:59:36.898786   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:36.898870   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:36.908328   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0910 18:59:36.917272   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:36.917334   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:59:36.928923   71627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
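The stale-config check above greps each existing kubeconfig for the expected control-plane endpoint and removes any file that does not reference it, so the following "kubeadm init phase kubeconfig" regenerates them cleanly. A simplified sketch of that cleanup is below, using the endpoint and file paths from the log; error handling is deliberately minimal.

// Illustrative sketch: drop kubeconfigs that do not mention the expected endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Corresponds to the logged "may not be in <file> - will remove".
			os.Remove(p)
			fmt.Printf("removed stale config %s\n", p)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8444", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}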
	I0910 18:59:36.940238   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:37.079143   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:37.945317   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:38.157807   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:38.245283   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:38.353653   71627 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:59:38.353746   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:38.854791   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:39.354743   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:39.409511   71627 api_server.go:72] duration metric: took 1.055855393s to wait for apiserver process to appear ...
	I0910 18:59:39.409543   71627 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:59:39.409566   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:39.410104   71627 api_server.go:269] stopped: https://192.168.72.54:8444/healthz: Get "https://192.168.72.54:8444/healthz": dial tcp 192.168.72.54:8444: connect: connection refused
	I0910 18:59:39.909665   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:39.018802   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:41.517911   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:38.753463   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:38.754076   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:38.754107   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:38.754019   72995 retry.go:31] will retry after 2.258214375s: waiting for machine to come up
	I0910 18:59:41.013524   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:41.013988   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:41.014011   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:41.013910   72995 retry.go:31] will retry after 2.030553777s: waiting for machine to come up
	I0910 18:59:41.976133   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:59:41.976166   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:59:41.976179   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:42.080631   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:42.080674   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:42.409865   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:42.421093   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:42.421174   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:42.910272   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:42.914729   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:42.914757   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:43.410280   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:43.414731   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I0910 18:59:43.421135   71627 api_server.go:141] control plane version: v1.31.0
	I0910 18:59:43.421163   71627 api_server.go:131] duration metric: took 4.011612782s to wait for apiserver health ...
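(Editorial note, not part of the captured log: the block above is minikube polling the apiserver's /healthz on the non-default port 8444 until it returns 200, tolerating the anonymous 403s and the 500s from still-failing poststarthooks. The Go sketch below illustrates that pattern; it is not minikube's api_server.go code. The URL, the ~500ms cadence, and the tolerated status codes come from the log; everything else is an assumption.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the timeout expires. TLS verification is skipped because the probe runs
// before client certificates are wired up (hence the anonymous 403s above).
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			// 403 (anonymous user) and 500 (poststarthooks not finished) are
			// expected while the control plane restarts; log and keep polling.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.54:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```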
	I0910 18:59:43.421172   71627 cni.go:84] Creating CNI manager for ""
	I0910 18:59:43.421178   71627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:43.423063   71627 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 18:59:43.424278   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 18:59:43.434823   71627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
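(Editorial note: the 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist is not printed in the log. The sketch below generates a generic bridge CNI config of that shape; the subnet, plugin list, and field values are assumptions, not the exact bytes minikube wrote.)

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Rough approximation of a bridge CNI conflist of the kind written to
// /etc/cni/net.d/1-k8s.conflist. Values here are illustrative only.
func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []any{
			map[string]any{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // assumed pod CIDR
				},
			},
			map[string]any{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out)) // payload of the "scp memory" step above
}
```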
	I0910 18:59:43.461604   71627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:59:43.477566   71627 system_pods.go:59] 8 kube-system pods found
	I0910 18:59:43.477592   71627 system_pods.go:61] "coredns-6f6b679f8f-nq9fl" [87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 18:59:43.477600   71627 system_pods.go:61] "etcd-default-k8s-diff-port-557504" [484ce7e2-e5b6-4b96-928e-c4519e072425] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 18:59:43.477606   71627 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-557504" [55a71e48-2cca-484c-8186-176494f8158c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 18:59:43.477616   71627 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-557504" [98e7866a-c4a1-4b90-a758-506502405feb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 18:59:43.477623   71627 system_pods.go:61] "kube-proxy-4t8r9" [aca739fc-0169-433b-85f1-17bf3ab538cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0910 18:59:43.477631   71627 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-557504" [eaa24622-2f9c-47bd-b002-173d5fb06126] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 18:59:43.477638   71627 system_pods.go:61] "metrics-server-6867b74b74-4sfwg" [6b5d0161-6a62-4752-b714-ada6b3772956] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 18:59:43.477648   71627 system_pods.go:61] "storage-provisioner" [7536d42b-90f4-44de-a7ba-652f8e535304] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 18:59:43.477658   71627 system_pods.go:74] duration metric: took 16.035701ms to wait for pod list to return data ...
	I0910 18:59:43.477673   71627 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:59:43.485818   71627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:59:43.485840   71627 node_conditions.go:123] node cpu capacity is 2
	I0910 18:59:43.485850   71627 node_conditions.go:105] duration metric: took 8.173642ms to run NodePressure ...
	I0910 18:59:43.485864   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:43.752422   71627 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 18:59:43.756713   71627 kubeadm.go:739] kubelet initialised
	I0910 18:59:43.756735   71627 kubeadm.go:740] duration metric: took 4.285787ms waiting for restarted kubelet to initialise ...
	I0910 18:59:43.756744   71627 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:43.762384   71627 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.767080   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.767099   71627 pod_ready.go:82] duration metric: took 4.695864ms for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.767109   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.767116   71627 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.772560   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.772579   71627 pod_ready.go:82] duration metric: took 5.453737ms for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.772588   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.772593   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.776328   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.776345   71627 pod_ready.go:82] duration metric: took 3.745149ms for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.776352   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.776357   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.865825   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.865850   71627 pod_ready.go:82] duration metric: took 89.48636ms for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.865862   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.865868   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:44.264892   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-proxy-4t8r9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.264922   71627 pod_ready.go:82] duration metric: took 399.047611ms for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:44.264932   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-proxy-4t8r9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.264938   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:44.665376   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.665402   71627 pod_ready.go:82] duration metric: took 400.457184ms for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:44.665413   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.665418   71627 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:45.065696   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:45.065724   71627 pod_ready.go:82] duration metric: took 400.298527ms for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:45.065736   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:45.065743   71627 pod_ready.go:39] duration metric: took 1.308988307s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
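(Editorial note: pod_ready.go above walks the system-critical pods by label and skips each wait while the node itself is not "Ready". A minimal client-go sketch of that loop is shown below, assuming the kubeconfig path from the log; this is not minikube's pod_ready.go implementation.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19598-5973/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same label selectors the log waits on, checked one at a time.
	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
	deadline := time.Now().Add(4 * time.Minute)
	for _, sel := range selectors {
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
				metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 && podIsReady(&pods.Items[0]) {
				fmt.Println(sel, "ready")
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
}
```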
	I0910 18:59:45.065759   71627 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 18:59:45.077813   71627 ops.go:34] apiserver oom_adj: -16
	I0910 18:59:45.077838   71627 kubeadm.go:597] duration metric: took 8.323378955s to restartPrimaryControlPlane
	I0910 18:59:45.077846   71627 kubeadm.go:394] duration metric: took 8.384626167s to StartCluster
	I0910 18:59:45.077860   71627 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:45.077980   71627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:59:45.079979   71627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:45.080304   71627 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 18:59:45.080399   71627 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 18:59:45.080478   71627 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-557504"
	I0910 18:59:45.080510   71627 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-557504"
	I0910 18:59:45.080506   71627 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-557504"
	W0910 18:59:45.080523   71627 addons.go:243] addon storage-provisioner should already be in state true
	I0910 18:59:45.080519   71627 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-557504"
	I0910 18:59:45.080553   71627 host.go:66] Checking if "default-k8s-diff-port-557504" exists ...
	I0910 18:59:45.080568   71627 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-557504"
	I0910 18:59:45.080568   71627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-557504"
	W0910 18:59:45.080582   71627 addons.go:243] addon metrics-server should already be in state true
	I0910 18:59:45.080529   71627 config.go:182] Loaded profile config "default-k8s-diff-port-557504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:59:45.080608   71627 host.go:66] Checking if "default-k8s-diff-port-557504" exists ...
	I0910 18:59:45.080906   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.080932   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.080989   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.080994   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.081015   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.081015   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.081905   71627 out.go:177] * Verifying Kubernetes components...
	I0910 18:59:45.083206   71627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:45.096019   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43935
	I0910 18:59:45.096288   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38379
	I0910 18:59:45.096453   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.096730   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.096984   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.097012   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.097243   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.097273   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.097401   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.097596   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.097678   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0910 18:59:45.097693   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.098049   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.098464   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.098504   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.099185   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.099207   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.099592   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.100125   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.100166   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.101159   71627 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-557504"
	W0910 18:59:45.101175   71627 addons.go:243] addon default-storageclass should already be in state true
	I0910 18:59:45.101203   71627 host.go:66] Checking if "default-k8s-diff-port-557504" exists ...
	I0910 18:59:45.101501   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.101537   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.114823   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41241
	I0910 18:59:45.115253   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.115363   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38597
	I0910 18:59:45.115737   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.115759   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.115795   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.116106   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.116244   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.116270   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.116289   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.116696   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.117290   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.117327   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.117546   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36199
	I0910 18:59:45.117879   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.118496   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:45.118631   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.118643   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.118949   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.119107   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.120353   71627 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 18:59:45.120775   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:45.121685   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 18:59:45.121699   71627 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 18:59:45.121718   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:45.122500   71627 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:45.123762   71627 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:59:45.123778   71627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 18:59:45.123792   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:45.125345   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.125926   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:45.126161   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:45.126357   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:45.125943   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.126548   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:45.126661   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:45.127075   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.127507   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:45.127522   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.127675   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:45.127810   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:45.127905   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:45.127997   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:45.132978   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39839
	I0910 18:59:45.133303   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.133757   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.133779   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.134043   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.134188   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.135712   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:45.135917   71627 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 18:59:45.135928   71627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 18:59:45.135938   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:45.138375   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.138616   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:45.138629   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.138768   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:45.138937   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:45.139054   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:45.139181   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
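(Editorial note: the sshutil lines above create SSH clients to the guest as user "docker" on port 22 with the per-machine RSA key. The sketch below does the equivalent with golang.org/x/crypto/ssh; it is a generic illustration, not minikube's sshutil code, and host-key checking is skipped because the VMs here are throwaway.)

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHClient dials the guest the same way the sshutil lines describe:
// user "docker", port 22, per-machine private key.
func newSSHClient(ip, keyPath string) (*ssh.Client, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for disposable test VMs
	}
	return ssh.Dial("tcp", ip+":22", cfg)
}

func main() {
	client, err := newSSHClient("192.168.72.54",
		"/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa")
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		fmt.Println(err)
		return
	}
	defer sess.Close()
	out, _ := sess.CombinedOutput("echo connected")
	fmt.Print(string(out))
}
```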
	I0910 18:59:45.293036   71627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:45.311747   71627 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-557504" to be "Ready" ...
	I0910 18:59:45.425820   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 18:59:45.425852   71627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 18:59:45.430783   71627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:59:45.441452   71627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 18:59:45.481245   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 18:59:45.481268   71627 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 18:59:45.573348   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 18:59:45.573373   71627 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 18:59:45.634830   71627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 18:59:46.589194   71627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.147713188s)
	I0910 18:59:46.589253   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589266   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589284   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589311   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589321   71627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.158508631s)
	I0910 18:59:46.589343   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589355   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589700   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.589700   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.589723   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589729   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589730   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.589736   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.589738   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589741   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.589751   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.589752   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589761   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589774   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589816   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589755   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589852   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589961   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589971   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.590192   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.590207   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.590220   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.591675   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.591692   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.591702   71627 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-557504"
	I0910 18:59:46.595906   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.595921   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.596105   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.596126   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.598033   71627 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0910 18:59:44.023282   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:46.516768   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:47.016400   71529 pod_ready.go:93] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:47.016423   71529 pod_ready.go:82] duration metric: took 12.506359172s for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:47.016435   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7v9n8" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:47.020809   71529 pod_ready.go:93] pod "kube-proxy-7v9n8" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:47.020827   71529 pod_ready.go:82] duration metric: took 4.386051ms for pod "kube-proxy-7v9n8" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:47.020836   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.046937   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:43.047363   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:43.047393   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:43.047314   72995 retry.go:31] will retry after 2.233047134s: waiting for machine to come up
	I0910 18:59:45.282610   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:45.283104   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:45.283133   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:45.283026   72995 retry.go:31] will retry after 4.238676711s: waiting for machine to come up
	I0910 18:59:51.182133   71183 start.go:364] duration metric: took 56.422548201s to acquireMachinesLock for "embed-certs-836868"
	I0910 18:59:51.182195   71183 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:51.182206   71183 fix.go:54] fixHost starting: 
	I0910 18:59:51.182600   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:51.182637   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:51.198943   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33067
	I0910 18:59:51.199345   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:51.199803   71183 main.go:141] libmachine: Using API Version  1
	I0910 18:59:51.199828   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:51.200153   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:51.200364   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 18:59:51.200493   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 18:59:51.202100   71183 fix.go:112] recreateIfNeeded on embed-certs-836868: state=Stopped err=<nil>
	I0910 18:59:51.202123   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	W0910 18:59:51.202286   71183 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:51.204028   71183 out.go:177] * Restarting existing kvm2 VM for "embed-certs-836868" ...
	I0910 18:59:46.599125   71627 addons.go:510] duration metric: took 1.518742666s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
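(Editorial note: the addon enablement above boils down to running the bundled kubectl on the guest against the node-local kubeconfig, as the earlier ssh_runner lines show. The sketch below reproduces that invocation with os/exec; the paths and manifest list are taken from the log, and it would have to run on the node itself, so treat it as an illustration rather than a reusable tool.)

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyAddon mirrors the command the log shows ssh_runner executing on the
// guest: the bundled kubectl applies addon manifests using the local kubeconfig.
func applyAddon(manifests ...string) error {
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.0/kubectl", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// Same manifest set applied for metrics-server in the log above.
	if err := applyAddon(
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	); err != nil {
		fmt.Println("apply failed:", err)
	}
}
```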
	I0910 18:59:47.316003   71627 node_ready.go:53] node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:49.316691   71627 node_ready.go:53] node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:49.027374   71529 pod_ready.go:93] pod "kube-scheduler-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:49.027393   71529 pod_ready.go:82] duration metric: took 2.006551523s for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:49.027403   71529 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:51.034568   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:51.205180   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Start
	I0910 18:59:51.205332   71183 main.go:141] libmachine: (embed-certs-836868) Ensuring networks are active...
	I0910 18:59:51.205952   71183 main.go:141] libmachine: (embed-certs-836868) Ensuring network default is active
	I0910 18:59:51.206322   71183 main.go:141] libmachine: (embed-certs-836868) Ensuring network mk-embed-certs-836868 is active
	I0910 18:59:51.206717   71183 main.go:141] libmachine: (embed-certs-836868) Getting domain xml...
	I0910 18:59:51.207430   71183 main.go:141] libmachine: (embed-certs-836868) Creating domain...
	I0910 18:59:49.526000   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.526536   72122 main.go:141] libmachine: (old-k8s-version-432422) Found IP for machine: 192.168.61.51
	I0910 18:59:49.526558   72122 main.go:141] libmachine: (old-k8s-version-432422) Reserving static IP address...
	I0910 18:59:49.526569   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has current primary IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.527018   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "old-k8s-version-432422", mac: "52:54:00:65:ab:4d", ip: "192.168.61.51"} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.527063   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | skip adding static IP to network mk-old-k8s-version-432422 - found existing host DHCP lease matching {name: "old-k8s-version-432422", mac: "52:54:00:65:ab:4d", ip: "192.168.61.51"}
	I0910 18:59:49.527084   72122 main.go:141] libmachine: (old-k8s-version-432422) Reserved static IP address: 192.168.61.51
	I0910 18:59:49.527099   72122 main.go:141] libmachine: (old-k8s-version-432422) Waiting for SSH to be available...
	I0910 18:59:49.527113   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Getting to WaitForSSH function...
	I0910 18:59:49.529544   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.529962   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.529987   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.530143   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Using SSH client type: external
	I0910 18:59:49.530170   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa (-rw-------)
	I0910 18:59:49.530195   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:49.530208   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | About to run SSH command:
	I0910 18:59:49.530245   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | exit 0
	I0910 18:59:49.656944   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | SSH cmd err, output: <nil>: 
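(Editorial note: WaitForSSH above simply retries a cheap "exit 0" over the external ssh client until the restarted VM answers. A minimal retry loop in that spirit is sketched below; the IP and key path come from the log, the flags are a subset of those shown, and the retry interval is an assumption.)

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries "ssh ... exit 0" until the guest accepts the connection
// or the timeout expires, mirroring the WaitForSSH probe in the log.
func waitForSSH(ip, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+ip, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // guest is reachable over SSH
		}
		time.Sleep(3 * time.Second) // assumed retry interval
	}
	return fmt.Errorf("ssh to %s not available after %s", ip, timeout)
}

func main() {
	err := waitForSSH("192.168.61.51",
		"/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa",
		2*time.Minute)
	fmt.Println(err)
}
```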
	I0910 18:59:49.657307   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetConfigRaw
	I0910 18:59:49.657926   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:49.660332   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.660689   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.660711   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.660992   72122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/config.json ...
	I0910 18:59:49.661238   72122 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:49.661259   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:49.661480   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.663824   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.664208   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.664236   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.664370   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.664565   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.664712   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.664887   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.665103   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.665392   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.665406   72122 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:49.769433   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:49.769468   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:49.769716   72122 buildroot.go:166] provisioning hostname "old-k8s-version-432422"
	I0910 18:59:49.769740   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:49.769918   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.772324   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.772710   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.772736   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.772875   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.773061   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.773245   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.773384   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.773554   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.773751   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.773764   72122 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-432422 && echo "old-k8s-version-432422" | sudo tee /etc/hostname
	I0910 18:59:49.891230   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-432422
	
	I0910 18:59:49.891259   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.894272   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.894641   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.894683   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.894820   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.894983   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.895094   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.895210   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.895330   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.895540   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.895559   72122 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-432422' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-432422/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-432422' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:59:50.011767   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:59:50.011795   72122 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:50.011843   72122 buildroot.go:174] setting up certificates
	I0910 18:59:50.011854   72122 provision.go:84] configureAuth start
	I0910 18:59:50.011866   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:50.012185   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:50.014947   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.015352   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.015388   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.015549   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.017712   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.018002   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.018036   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.018193   72122 provision.go:143] copyHostCerts
	I0910 18:59:50.018251   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:50.018265   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:50.018337   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:50.018481   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:50.018491   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:50.018513   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:50.018585   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:50.018594   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:50.018612   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:50.018667   72122 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-432422 san=[127.0.0.1 192.168.61.51 localhost minikube old-k8s-version-432422]
	I0910 18:59:50.528798   72122 provision.go:177] copyRemoteCerts
	I0910 18:59:50.528864   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:50.528900   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.532154   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.532576   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.532613   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.532765   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.532995   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.533205   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.533370   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:50.620169   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0910 18:59:50.647163   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:50.679214   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:50.704333   72122 provision.go:87] duration metric: took 692.46607ms to configureAuth
	I0910 18:59:50.704360   72122 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:50.704545   72122 config.go:182] Loaded profile config "old-k8s-version-432422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0910 18:59:50.704639   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.707529   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.707903   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.707931   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.708082   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.708297   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.708463   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.708641   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.708786   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:50.708954   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:50.708969   72122 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:50.935375   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:50.935403   72122 machine.go:96] duration metric: took 1.274152353s to provisionDockerMachine
	I0910 18:59:50.935414   72122 start.go:293] postStartSetup for "old-k8s-version-432422" (driver="kvm2")
	I0910 18:59:50.935424   72122 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:50.935448   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:50.935763   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:50.935796   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.938507   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.938865   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.938902   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.939008   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.939198   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.939529   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.939689   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.024726   72122 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:51.029522   72122 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:51.029547   72122 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:51.029632   72122 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:51.029734   72122 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:51.029848   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:51.042454   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:51.068748   72122 start.go:296] duration metric: took 133.318275ms for postStartSetup
	I0910 18:59:51.068792   72122 fix.go:56] duration metric: took 20.866428313s for fixHost
	I0910 18:59:51.068816   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.071533   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.071894   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.071921   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.072072   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.072264   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.072468   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.072616   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.072784   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:51.072938   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:51.072948   72122 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:51.181996   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994791.151610055
	
	I0910 18:59:51.182016   72122 fix.go:216] guest clock: 1725994791.151610055
	I0910 18:59:51.182024   72122 fix.go:229] Guest: 2024-09-10 18:59:51.151610055 +0000 UTC Remote: 2024-09-10 18:59:51.068796263 +0000 UTC m=+228.614166738 (delta=82.813792ms)
	I0910 18:59:51.182048   72122 fix.go:200] guest clock delta is within tolerance: 82.813792ms
	I0910 18:59:51.182055   72122 start.go:83] releasing machines lock for "old-k8s-version-432422", held for 20.979733564s
	I0910 18:59:51.182094   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.182331   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:51.184857   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.185183   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.185212   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.185346   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.185840   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.186006   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.186079   72122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:51.186143   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.186215   72122 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:51.186238   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.189304   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189674   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.189698   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189765   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189879   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.190057   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.190212   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.190230   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.190255   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.190358   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.190470   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.190652   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.190817   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.190948   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.296968   72122 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:51.303144   72122 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:51.447027   72122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:51.454963   72122 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:51.455032   72122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:51.474857   72122 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:51.474882   72122 start.go:495] detecting cgroup driver to use...
	I0910 18:59:51.474957   72122 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:51.490457   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:51.504502   72122 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:51.504569   72122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:51.523331   72122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:51.543438   72122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:51.678734   72122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:51.831736   72122 docker.go:233] disabling docker service ...
	I0910 18:59:51.831804   72122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:51.846805   72122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:51.865771   72122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:52.012922   72122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:52.161595   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:52.180034   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:52.200984   72122 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0910 18:59:52.201041   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.211927   72122 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:52.211989   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.223601   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.234211   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.246209   72122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:52.264079   72122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:52.277144   72122 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:52.277204   72122 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:52.292683   72122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:52.304601   72122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:52.421971   72122 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:59:52.544386   72122 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:52.544459   72122 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:52.551436   72122 start.go:563] Will wait 60s for crictl version
	I0910 18:59:52.551487   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:52.555614   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:52.598031   72122 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:52.598128   72122 ssh_runner.go:195] Run: crio --version
	I0910 18:59:52.629578   72122 ssh_runner.go:195] Run: crio --version
	I0910 18:59:52.662403   72122 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0910 18:59:51.815436   71627 node_ready.go:53] node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:52.816775   71627 node_ready.go:49] node "default-k8s-diff-port-557504" has status "Ready":"True"
	I0910 18:59:52.816809   71627 node_ready.go:38] duration metric: took 7.505015999s for node "default-k8s-diff-port-557504" to be "Ready" ...
	I0910 18:59:52.816821   71627 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:52.823528   71627 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.829667   71627 pod_ready.go:93] pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:52.829688   71627 pod_ready.go:82] duration metric: took 6.135159ms for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.829696   71627 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.833912   71627 pod_ready.go:93] pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:52.833933   71627 pod_ready.go:82] duration metric: took 4.231672ms for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.833942   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.838863   71627 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:52.838883   71627 pod_ready.go:82] duration metric: took 4.934379ms for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.838897   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:53.851413   71627 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:53.851437   71627 pod_ready.go:82] duration metric: took 1.012531075s for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:53.851447   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:54.020886   71627 pod_ready.go:93] pod "kube-proxy-4t8r9" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:54.020910   71627 pod_ready.go:82] duration metric: took 169.456474ms for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:54.020926   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:55.217416   71627 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:55.217440   71627 pod_ready.go:82] duration metric: took 1.196506075s for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:55.217451   71627 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:53.036769   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:55.536544   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:52.544041   71183 main.go:141] libmachine: (embed-certs-836868) Waiting to get IP...
	I0910 18:59:52.545001   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:52.545522   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:52.545586   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:52.545494   73202 retry.go:31] will retry after 260.451431ms: waiting for machine to come up
	I0910 18:59:52.807914   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:52.808351   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:52.808377   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:52.808307   73202 retry.go:31] will retry after 340.526757ms: waiting for machine to come up
	I0910 18:59:53.150854   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:53.151446   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:53.151476   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:53.151404   73202 retry.go:31] will retry after 470.620322ms: waiting for machine to come up
	I0910 18:59:53.624169   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:53.624709   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:53.624747   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:53.624657   73202 retry.go:31] will retry after 529.186273ms: waiting for machine to come up
	I0910 18:59:54.155156   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:54.155644   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:54.155673   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:54.155599   73202 retry.go:31] will retry after 575.877001ms: waiting for machine to come up
	I0910 18:59:54.733522   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:54.734049   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:54.734092   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:54.734000   73202 retry.go:31] will retry after 577.385946ms: waiting for machine to come up
	I0910 18:59:55.312705   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:55.313087   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:55.313114   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:55.313059   73202 retry.go:31] will retry after 735.788809ms: waiting for machine to come up
	I0910 18:59:56.049771   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:56.050272   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:56.050306   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:56.050224   73202 retry.go:31] will retry after 1.433431053s: waiting for machine to come up
	I0910 18:59:52.663465   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:52.666401   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:52.666796   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:52.666843   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:52.667002   72122 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:52.672338   72122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:52.688427   72122 kubeadm.go:883] updating cluster {Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:52.688559   72122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 18:59:52.688623   72122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:52.740370   72122 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0910 18:59:52.740447   72122 ssh_runner.go:195] Run: which lz4
	I0910 18:59:52.744925   72122 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 18:59:52.749840   72122 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 18:59:52.749872   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0910 18:59:54.437031   72122 crio.go:462] duration metric: took 1.692132914s to copy over tarball
	I0910 18:59:54.437124   72122 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 18:59:57.462705   72122 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.025545297s)
	I0910 18:59:57.462743   72122 crio.go:469] duration metric: took 3.025690485s to extract the tarball
	I0910 18:59:57.462753   72122 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 18:59:57.223959   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:59.224657   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:01.224783   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:58.035610   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:00.535779   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:57.485417   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:57.485870   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:57.485896   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:57.485815   73202 retry.go:31] will retry after 1.638565814s: waiting for machine to come up
	I0910 18:59:59.126134   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:59.126625   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:59.126657   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:59.126576   73202 retry.go:31] will retry after 2.127929201s: waiting for machine to come up
	I0910 19:00:01.256121   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:01.256665   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 19:00:01.256694   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 19:00:01.256612   73202 retry.go:31] will retry after 2.530100505s: waiting for machine to come up
	I0910 18:59:57.508817   72122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:57.551327   72122 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0910 18:59:57.551350   72122 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 18:59:57.551434   72122 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:57.551704   72122 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.551776   72122 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.552000   72122 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.551807   72122 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.551846   72122 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.551714   72122 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0910 18:59:57.551917   72122 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.553642   72122 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.553660   72122 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.553917   72122 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:57.553935   72122 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0910 18:59:57.554014   72122 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.554160   72122 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.554376   72122 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.554662   72122 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.726191   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.742799   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.745264   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.753214   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.768122   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.770828   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0910 18:59:57.774835   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.807657   72122 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0910 18:59:57.807693   72122 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.807733   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.908662   72122 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0910 18:59:57.908678   72122 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0910 18:59:57.908707   72122 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.908711   72122 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.908759   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.908760   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.920214   72122 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0910 18:59:57.920248   72122 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0910 18:59:57.920258   72122 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.920280   72122 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.920304   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.920313   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.937914   72122 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0910 18:59:57.937952   72122 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.937958   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.937999   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.938033   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.938006   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.938073   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.938063   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.938157   72122 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0910 18:59:57.938185   72122 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0910 18:59:57.938215   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:58.044082   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:58.044139   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:58.044146   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:58.044173   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.045813   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:58.045816   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:58.045849   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.198804   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:58.198841   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:58.198881   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:58.198944   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.198978   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:58.199000   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:58.199081   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.353153   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0910 18:59:58.353217   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0910 18:59:58.353232   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0910 18:59:58.353277   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0910 18:59:58.359353   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.359363   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0910 18:59:58.359421   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.386872   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:58.407734   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0910 18:59:58.425479   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0910 18:59:58.553340   72122 cache_images.go:92] duration metric: took 1.001972084s to LoadCachedImages
	W0910 18:59:58.553438   72122 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0910 18:59:58.553455   72122 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.20.0 crio true true} ...
	I0910 18:59:58.553634   72122 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-432422 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:59:58.553722   72122 ssh_runner.go:195] Run: crio config
	I0910 18:59:58.605518   72122 cni.go:84] Creating CNI manager for ""
	I0910 18:59:58.605542   72122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:58.605554   72122 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:58.605577   72122 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-432422 NodeName:old-k8s-version-432422 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0910 18:59:58.605744   72122 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-432422"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:59:58.605814   72122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0910 18:59:58.618033   72122 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:58.618096   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:58.629175   72122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0910 18:59:58.653830   72122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:58.679797   72122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0910 18:59:58.698692   72122 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:58.702565   72122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:58.715128   72122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:58.858262   72122 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:58.876681   72122 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422 for IP: 192.168.61.51
	I0910 18:59:58.876719   72122 certs.go:194] generating shared ca certs ...
	I0910 18:59:58.876740   72122 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:58.876921   72122 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:58.876983   72122 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:58.876996   72122 certs.go:256] generating profile certs ...
	I0910 18:59:58.877129   72122 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/client.key
	I0910 18:59:58.877210   72122 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key.da6b542b
	I0910 18:59:58.877264   72122 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.key
	I0910 18:59:58.877424   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:58.877473   72122 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:58.877491   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:58.877528   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:58.877560   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:58.877591   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:58.877648   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:58.878410   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:58.936013   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:58.969736   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:59.017414   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:59.063599   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0910 18:59:59.093934   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:59:59.138026   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:59.166507   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 18:59:59.196972   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:59.223596   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:59.250627   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:59.279886   72122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:59.300491   72122 ssh_runner.go:195] Run: openssl version
	I0910 18:59:59.306521   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:59.317238   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.321625   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.321682   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.327532   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:59.339028   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:59.350578   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.355025   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.355106   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.360701   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:59.375040   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:59.389867   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.395829   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.395890   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.402425   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:59:59.414077   72122 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:59.418909   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:59.425061   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:59.431213   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:59.437581   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:59.443603   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:59.449820   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 18:59:59.456100   72122 kubeadm.go:392] StartCluster: {Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:59.456189   72122 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:59.456234   72122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:59.497167   72122 cri.go:89] found id: ""
	I0910 18:59:59.497227   72122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:59.508449   72122 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:59.508474   72122 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:59.508527   72122 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:59.521416   72122 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:59.522489   72122 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-432422" does not appear in /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:59:59.523125   72122 kubeconfig.go:62] /home/jenkins/minikube-integration/19598-5973/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-432422" cluster setting kubeconfig missing "old-k8s-version-432422" context setting]
	I0910 18:59:59.524107   72122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:59.637793   72122 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:59.651879   72122 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.51
	I0910 18:59:59.651916   72122 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:59.651930   72122 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:59.651989   72122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:59.691857   72122 cri.go:89] found id: ""
	I0910 18:59:59.691922   72122 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:59.708610   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:59.718680   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:59.718702   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:59.718755   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:59:59.729965   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:59.730028   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:59.740037   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:59:59.750640   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:59.750706   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:59.762436   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:59:59.773456   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:59.773522   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:59.783438   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:59:59.792996   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:59.793056   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:59:59.805000   72122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:59:59.815384   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:59.955068   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:00.842403   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.102530   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.212897   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.340128   72122 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:00:01.340217   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:01.841004   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:02.340913   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:03.225898   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:05.723882   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:03.034295   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:05.034431   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:03.790275   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:03.790710   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 19:00:03.790736   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 19:00:03.790662   73202 retry.go:31] will retry after 3.202952028s: waiting for machine to come up
	I0910 19:00:06.995302   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:06.996124   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 19:00:06.996149   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 19:00:06.996073   73202 retry.go:31] will retry after 3.076425277s: waiting for machine to come up
	I0910 19:00:02.840935   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:03.340938   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:03.840669   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:04.341213   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:04.841274   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:05.340698   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:05.841152   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:06.340425   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:06.841001   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:07.341198   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:07.724121   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:10.223744   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:07.533428   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:09.534830   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:12.033655   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:10.075125   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.075606   71183 main.go:141] libmachine: (embed-certs-836868) Found IP for machine: 192.168.39.107
	I0910 19:00:10.075634   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has current primary IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.075643   71183 main.go:141] libmachine: (embed-certs-836868) Reserving static IP address...
	I0910 19:00:10.076046   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "embed-certs-836868", mac: "52:54:00:24:ef:f2", ip: "192.168.39.107"} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.076075   71183 main.go:141] libmachine: (embed-certs-836868) DBG | skip adding static IP to network mk-embed-certs-836868 - found existing host DHCP lease matching {name: "embed-certs-836868", mac: "52:54:00:24:ef:f2", ip: "192.168.39.107"}
	I0910 19:00:10.076103   71183 main.go:141] libmachine: (embed-certs-836868) Reserved static IP address: 192.168.39.107
	I0910 19:00:10.076122   71183 main.go:141] libmachine: (embed-certs-836868) Waiting for SSH to be available...
	I0910 19:00:10.076133   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Getting to WaitForSSH function...
	I0910 19:00:10.078039   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.078327   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.078352   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.078452   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Using SSH client type: external
	I0910 19:00:10.078475   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa (-rw-------)
	I0910 19:00:10.078514   71183 main.go:141] libmachine: (embed-certs-836868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 19:00:10.078527   71183 main.go:141] libmachine: (embed-certs-836868) DBG | About to run SSH command:
	I0910 19:00:10.078548   71183 main.go:141] libmachine: (embed-certs-836868) DBG | exit 0
	I0910 19:00:10.201403   71183 main.go:141] libmachine: (embed-certs-836868) DBG | SSH cmd err, output: <nil>: 
	I0910 19:00:10.201748   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetConfigRaw
	I0910 19:00:10.202405   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:10.204760   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.205130   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.205160   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.205408   71183 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/config.json ...
	I0910 19:00:10.205697   71183 machine.go:93] provisionDockerMachine start ...
	I0910 19:00:10.205714   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:10.205924   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.208095   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.208394   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.208418   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.208534   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.208712   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.208856   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.208958   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.209193   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.209412   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.209427   71183 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 19:00:10.313247   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 19:00:10.313278   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 19:00:10.313556   71183 buildroot.go:166] provisioning hostname "embed-certs-836868"
	I0910 19:00:10.313584   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 19:00:10.313765   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.316135   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.316569   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.316592   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.316739   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.316893   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.317046   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.317165   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.317288   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.317490   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.317506   71183 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-836868 && echo "embed-certs-836868" | sudo tee /etc/hostname
	I0910 19:00:10.433585   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-836868
	
	I0910 19:00:10.433608   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.436076   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.436407   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.436440   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.436627   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.436826   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.436972   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.437146   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.437314   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.437480   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.437495   71183 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-836868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-836868/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-836868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 19:00:10.546105   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 19:00:10.546146   71183 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 19:00:10.546186   71183 buildroot.go:174] setting up certificates
	I0910 19:00:10.546197   71183 provision.go:84] configureAuth start
	I0910 19:00:10.546214   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 19:00:10.546485   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:10.549236   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.549567   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.549594   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.549696   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.551807   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.552162   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.552195   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.552326   71183 provision.go:143] copyHostCerts
	I0910 19:00:10.552370   71183 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 19:00:10.552380   71183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 19:00:10.552435   71183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 19:00:10.552559   71183 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 19:00:10.552568   71183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 19:00:10.552588   71183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 19:00:10.552646   71183 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 19:00:10.552653   71183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 19:00:10.552669   71183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 19:00:10.552714   71183 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.embed-certs-836868 san=[127.0.0.1 192.168.39.107 embed-certs-836868 localhost minikube]
	I0910 19:00:10.610073   71183 provision.go:177] copyRemoteCerts
	I0910 19:00:10.610132   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 19:00:10.610153   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.612881   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.613264   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.613301   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.613485   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.613695   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.613863   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.613980   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:10.695479   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 19:00:10.719380   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0910 19:00:10.744099   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 19:00:10.767849   71183 provision.go:87] duration metric: took 221.638443ms to configureAuth
	I0910 19:00:10.767873   71183 buildroot.go:189] setting minikube options for container-runtime
	I0910 19:00:10.768065   71183 config.go:182] Loaded profile config "embed-certs-836868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:00:10.768150   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.770831   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.771149   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.771178   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.771338   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.771539   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.771702   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.771825   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.771952   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.772106   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.772120   71183 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 19:00:10.992528   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 19:00:10.992568   71183 machine.go:96] duration metric: took 786.857321ms to provisionDockerMachine
	I0910 19:00:10.992583   71183 start.go:293] postStartSetup for "embed-certs-836868" (driver="kvm2")
	I0910 19:00:10.992598   71183 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 19:00:10.992630   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:10.992999   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 19:00:10.993030   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.995361   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.995745   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.995777   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.995925   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.996100   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.996212   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.996375   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:11.079205   71183 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 19:00:11.083998   71183 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 19:00:11.084028   71183 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 19:00:11.084089   71183 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 19:00:11.084158   71183 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 19:00:11.084241   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 19:00:11.093150   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 19:00:11.116894   71183 start.go:296] duration metric: took 124.294668ms for postStartSetup
	I0910 19:00:11.116938   71183 fix.go:56] duration metric: took 19.934731446s for fixHost
	I0910 19:00:11.116962   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:11.119482   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.119784   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.119821   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.119980   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:11.120176   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.120331   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.120501   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:11.120645   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:11.120868   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:11.120883   71183 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 19:00:11.217542   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994811.172877822
	
	I0910 19:00:11.217570   71183 fix.go:216] guest clock: 1725994811.172877822
	I0910 19:00:11.217577   71183 fix.go:229] Guest: 2024-09-10 19:00:11.172877822 +0000 UTC Remote: 2024-09-10 19:00:11.116943488 +0000 UTC m=+358.948412200 (delta=55.934334ms)
	I0910 19:00:11.217603   71183 fix.go:200] guest clock delta is within tolerance: 55.934334ms
	I0910 19:00:11.217607   71183 start.go:83] releasing machines lock for "embed-certs-836868", held for 20.035440196s
	I0910 19:00:11.217627   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.217861   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:11.220855   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.221282   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.221313   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.221533   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.222074   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.222277   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.222354   71183 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 19:00:11.222402   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:11.222528   71183 ssh_runner.go:195] Run: cat /version.json
	I0910 19:00:11.222570   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:11.225205   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.225542   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.225565   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.225581   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.225753   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:11.225934   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.226035   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.226062   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.226109   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:11.226207   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:11.226283   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:11.226370   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.226535   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:11.226668   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:11.297642   71183 ssh_runner.go:195] Run: systemctl --version
	I0910 19:00:11.322486   71183 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 19:00:11.470402   71183 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 19:00:11.477843   71183 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 19:00:11.477903   71183 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 19:00:11.495518   71183 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 19:00:11.495542   71183 start.go:495] detecting cgroup driver to use...
	I0910 19:00:11.495597   71183 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 19:00:11.512467   71183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:00:11.526665   71183 docker.go:217] disabling cri-docker service (if available) ...
	I0910 19:00:11.526732   71183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 19:00:11.540445   71183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 19:00:11.554386   71183 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 19:00:11.682012   71183 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 19:00:11.846239   71183 docker.go:233] disabling docker service ...
	I0910 19:00:11.846303   71183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 19:00:11.860981   71183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 19:00:11.874271   71183 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 19:00:12.005716   71183 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 19:00:12.137151   71183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 19:00:12.151156   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:00:12.170086   71183 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 19:00:12.170150   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.180741   71183 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 19:00:12.180804   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.190933   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.200885   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:07.840772   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:08.341153   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:08.840737   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:09.340471   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:09.840262   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:10.340827   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:10.840645   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:11.340524   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:11.840521   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:12.340560   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:12.210950   71183 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 19:00:12.221730   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.232931   71183 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.251318   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.261473   71183 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 19:00:12.270818   71183 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 19:00:12.270873   71183 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 19:00:12.284581   71183 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 19:00:12.294214   71183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:00:12.424646   71183 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 19:00:12.517553   71183 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 19:00:12.517633   71183 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 19:00:12.522728   71183 start.go:563] Will wait 60s for crictl version
	I0910 19:00:12.522775   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:00:12.526754   71183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 19:00:12.569377   71183 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 19:00:12.569454   71183 ssh_runner.go:195] Run: crio --version
	I0910 19:00:12.597783   71183 ssh_runner.go:195] Run: crio --version
	I0910 19:00:12.632619   71183 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 19:00:12.725298   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:15.223906   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:14.035868   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:16.534058   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:12.633800   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:12.637104   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:12.637447   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:12.637476   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:12.637684   71183 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 19:00:12.641996   71183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:00:12.654577   71183 kubeadm.go:883] updating cluster {Name:embed-certs-836868 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-836868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 19:00:12.654684   71183 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 19:00:12.654737   71183 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 19:00:12.694585   71183 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 19:00:12.694644   71183 ssh_runner.go:195] Run: which lz4
	I0910 19:00:12.699764   71183 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 19:00:12.705406   71183 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 19:00:12.705437   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 19:00:14.054131   71183 crio.go:462] duration metric: took 1.354391682s to copy over tarball
	I0910 19:00:14.054206   71183 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 19:00:16.114941   71183 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.06070257s)
	I0910 19:00:16.114968   71183 crio.go:469] duration metric: took 2.060808083s to extract the tarball
	I0910 19:00:16.114978   71183 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 19:00:16.153934   71183 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 19:00:16.199988   71183 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 19:00:16.200008   71183 cache_images.go:84] Images are preloaded, skipping loading
	I0910 19:00:16.200015   71183 kubeadm.go:934] updating node { 192.168.39.107 8443 v1.31.0 crio true true} ...
	I0910 19:00:16.200109   71183 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-836868 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-836868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 19:00:16.200168   71183 ssh_runner.go:195] Run: crio config
	I0910 19:00:16.249409   71183 cni.go:84] Creating CNI manager for ""
	I0910 19:00:16.249430   71183 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:00:16.249443   71183 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 19:00:16.249462   71183 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-836868 NodeName:embed-certs-836868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 19:00:16.249596   71183 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-836868"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 19:00:16.249652   71183 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 19:00:16.265984   71183 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 19:00:16.266062   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 19:00:16.276007   71183 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0910 19:00:16.291971   71183 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 19:00:16.307712   71183 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0910 19:00:16.323789   71183 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0910 19:00:16.327478   71183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:00:16.339545   71183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:00:16.470249   71183 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:00:16.487798   71183 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868 for IP: 192.168.39.107
	I0910 19:00:16.487838   71183 certs.go:194] generating shared ca certs ...
	I0910 19:00:16.487858   71183 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:00:16.488058   71183 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 19:00:16.488110   71183 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 19:00:16.488124   71183 certs.go:256] generating profile certs ...
	I0910 19:00:16.488243   71183 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/client.key
	I0910 19:00:16.488307   71183 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/apiserver.key.04acd22a
	I0910 19:00:16.488355   71183 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/proxy-client.key
	I0910 19:00:16.488507   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 19:00:16.488547   71183 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 19:00:16.488560   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 19:00:16.488593   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 19:00:16.488633   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 19:00:16.488669   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 19:00:16.488856   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 19:00:16.489528   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 19:00:16.529980   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 19:00:16.568653   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 19:00:16.593924   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 19:00:16.628058   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0910 19:00:16.669209   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 19:00:16.693274   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 19:00:16.716323   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 19:00:16.740155   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 19:00:16.763908   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 19:00:16.787980   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 19:00:16.811754   71183 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 19:00:16.828151   71183 ssh_runner.go:195] Run: openssl version
	I0910 19:00:16.834095   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 19:00:16.845376   71183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:00:16.850178   71183 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:00:16.850230   71183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:00:16.856507   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 19:00:16.868105   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 19:00:16.879950   71183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 19:00:16.884778   71183 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 19:00:16.884823   71183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 19:00:16.890715   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 19:00:16.903523   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 19:00:16.914585   71183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 19:00:16.919105   71183 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 19:00:16.919151   71183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 19:00:16.924965   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 19:00:16.935579   71183 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 19:00:16.939895   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 19:00:16.945595   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 19:00:16.951247   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 19:00:16.956938   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 19:00:16.962908   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 19:00:16.968664   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
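The run of `openssl x509 -noout -in ... -checkend 86400` commands above verifies that each control-plane certificate stays valid for at least another 24 hours before the cluster restart proceeds. A rough Go equivalent of one such check is sketched below; the helper name and the hard-coded certificate path are illustrative only.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // will expire within the given duration, mirroring
    // `openssl x509 -noout -in <path> -checkend 86400`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
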
	I0910 19:00:16.974624   71183 kubeadm.go:392] StartCluster: {Name:embed-certs-836868 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-836868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:00:16.974725   71183 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 19:00:16.974778   71183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 19:00:17.012869   71183 cri.go:89] found id: ""
	I0910 19:00:17.012947   71183 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 19:00:17.023781   71183 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 19:00:17.023798   71183 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 19:00:17.023846   71183 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 19:00:17.034549   71183 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 19:00:17.035566   71183 kubeconfig.go:125] found "embed-certs-836868" server: "https://192.168.39.107:8443"
	I0910 19:00:17.037751   71183 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 19:00:17.047667   71183 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.107
	I0910 19:00:17.047696   71183 kubeadm.go:1160] stopping kube-system containers ...
	I0910 19:00:17.047708   71183 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 19:00:17.047747   71183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 19:00:17.083130   71183 cri.go:89] found id: ""
	I0910 19:00:17.083200   71183 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 19:00:17.101035   71183 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:00:17.111335   71183 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:00:17.111357   71183 kubeadm.go:157] found existing configuration files:
	
	I0910 19:00:17.111414   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:00:17.120543   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:00:17.120593   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:00:17.130938   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:00:17.140688   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:00:17.140747   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:00:17.150637   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:00:17.160483   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:00:17.160520   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:00:17.170417   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:00:17.179778   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:00:17.179827   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:00:17.189197   71183 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:00:17.199264   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:12.841060   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:13.340347   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:13.841136   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:14.340603   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:14.840913   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:15.341205   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:15.840692   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:16.340839   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:16.841050   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:17.341340   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:17.224985   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:19.231248   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:18.534658   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:20.534807   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:17.309791   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.257162   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.482216   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.555094   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.645089   71183 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:00:18.645178   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.146266   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.645546   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.146275   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.645291   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.662158   71183 api_server.go:72] duration metric: took 2.017082575s to wait for apiserver process to appear ...
	I0910 19:00:20.662183   71183 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:00:20.662204   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:17.840510   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:18.340821   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:18.841156   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.340316   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.840339   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.341140   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.841333   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:21.340342   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:21.840282   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:22.340361   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.326005   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 19:00:23.326036   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 19:00:23.326048   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:23.346004   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 19:00:23.346035   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 19:00:23.662353   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:23.669314   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:00:23.669344   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:00:24.162975   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:24.170262   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:00:24.170298   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:00:24.662865   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:24.667320   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I0910 19:00:24.674393   71183 api_server.go:141] control plane version: v1.31.0
	I0910 19:00:24.674418   71183 api_server.go:131] duration metric: took 4.01222766s to wait for apiserver health ...
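The 403 and 500 responses logged above are expected while the freshly restarted apiserver is still creating its RBAC bootstrap roles and priority classes; the wait simply keeps polling /healthz until it answers 200 "ok". A bare-bones poller in the same spirit is sketched below; the URL, timeout, and skipped TLS verification are simplifications for the sketch, not what the real check does (it trusts the cluster CA).

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint, tolerating the
    // transient 403/500 responses seen during startup, until it answers 200
    // or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Sketch only: verification is skipped here for brevity.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.107:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
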
	I0910 19:00:24.674427   71183 cni.go:84] Creating CNI manager for ""
	I0910 19:00:24.674433   71183 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:00:24.676229   71183 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 19:00:24.677519   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 19:00:24.692951   71183 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 19:00:24.718355   71183 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:00:24.732731   71183 system_pods.go:59] 8 kube-system pods found
	I0910 19:00:24.732758   71183 system_pods.go:61] "coredns-6f6b679f8f-mt78p" [b4bbfe99-3c36-4095-b7e8-ee0861f9973f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 19:00:24.732764   71183 system_pods.go:61] "etcd-embed-certs-836868" [0515a8db-e1b9-41d9-a69e-e49fbd6af70f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 19:00:24.732775   71183 system_pods.go:61] "kube-apiserver-embed-certs-836868" [f6771940-518b-4ae6-93a6-8ffd2e08a6ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 19:00:24.732781   71183 system_pods.go:61] "kube-controller-manager-embed-certs-836868" [07f6b11e-478b-4501-b615-8906877c7cbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 19:00:24.732798   71183 system_pods.go:61] "kube-proxy-4fddv" [13f0b1df-26eb-4a6c-957d-0b7655309cb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0910 19:00:24.732808   71183 system_pods.go:61] "kube-scheduler-embed-certs-836868" [a16bf2b1-3fe3-487b-92f7-f71f693b83dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 19:00:24.732817   71183 system_pods.go:61] "metrics-server-6867b74b74-26knw" [fdf89bfa-f2b6-4dc4-9279-ed75c1256494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:00:24.732823   71183 system_pods.go:61] "storage-provisioner" [47ed78c5-1cce-4d50-a023-5c356f331035] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 19:00:24.732835   71183 system_pods.go:74] duration metric: took 14.459216ms to wait for pod list to return data ...
	I0910 19:00:24.732846   71183 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:00:24.742472   71183 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:00:24.742497   71183 node_conditions.go:123] node cpu capacity is 2
	I0910 19:00:24.742507   71183 node_conditions.go:105] duration metric: took 9.657853ms to run NodePressure ...
	I0910 19:00:24.742523   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:25.021719   71183 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 19:00:25.026163   71183 kubeadm.go:739] kubelet initialised
	I0910 19:00:25.026187   71183 kubeadm.go:740] duration metric: took 4.442058ms waiting for restarted kubelet to initialise ...
	I0910 19:00:25.026196   71183 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:00:25.030895   71183 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.035021   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.035044   71183 pod_ready.go:82] duration metric: took 4.12756ms for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.035055   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.035064   71183 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.039362   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "etcd-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.039381   71183 pod_ready.go:82] duration metric: took 4.309293ms for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.039389   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "etcd-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.039394   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.049142   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.049164   71183 pod_ready.go:82] duration metric: took 9.762471ms for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.049175   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.049182   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.122255   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.122285   71183 pod_ready.go:82] duration metric: took 73.09407ms for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.122295   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.122301   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.522122   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-proxy-4fddv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.522160   71183 pod_ready.go:82] duration metric: took 399.850787ms for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.522175   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-proxy-4fddv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.522185   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.921918   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.921947   71183 pod_ready.go:82] duration metric: took 399.75274ms for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.921956   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.921962   71183 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:26.322195   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:26.322219   71183 pod_ready.go:82] duration metric: took 400.248825ms for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:26.322228   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:26.322235   71183 pod_ready.go:39] duration metric: took 1.296028669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
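Each pod_ready wait above is skipped with a "node ... not Ready" error because the node's own Ready condition is still False right after the kubelet restart; the underlying per-pod test is just the pod's PodReady condition. A minimal client-go sketch of that condition check follows; the kubeconfig path and the main() wiring are placeholders, not taken from the test run.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady returns true when the pod's PodReady condition is True,
    // which is the condition the pod_ready waits above poll for.
    func isPodReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
    	pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// Placeholder kubeconfig path; the log's actual pod is
    	// coredns-6f6b679f8f-mt78p in kube-system.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := isPodReady(clientset, "kube-system", "coredns-6f6b679f8f-mt78p")
    	fmt.Println(ready, err)
    }
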
	I0910 19:00:26.322251   71183 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 19:00:26.333796   71183 ops.go:34] apiserver oom_adj: -16
	I0910 19:00:26.333824   71183 kubeadm.go:597] duration metric: took 9.310018521s to restartPrimaryControlPlane
	I0910 19:00:26.333834   71183 kubeadm.go:394] duration metric: took 9.359219145s to StartCluster
	I0910 19:00:26.333850   71183 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:00:26.333920   71183 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 19:00:26.336496   71183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:00:26.336792   71183 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 19:00:26.336863   71183 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 19:00:26.336935   71183 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-836868"
	I0910 19:00:26.336969   71183 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-836868"
	W0910 19:00:26.336980   71183 addons.go:243] addon storage-provisioner should already be in state true
	I0910 19:00:26.336995   71183 addons.go:69] Setting default-storageclass=true in profile "embed-certs-836868"
	I0910 19:00:26.337050   71183 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-836868"
	I0910 19:00:26.337058   71183 config.go:182] Loaded profile config "embed-certs-836868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:00:26.337050   71183 addons.go:69] Setting metrics-server=true in profile "embed-certs-836868"
	I0910 19:00:26.337011   71183 host.go:66] Checking if "embed-certs-836868" exists ...
	I0910 19:00:26.337146   71183 addons.go:234] Setting addon metrics-server=true in "embed-certs-836868"
	W0910 19:00:26.337165   71183 addons.go:243] addon metrics-server should already be in state true
	I0910 19:00:26.337234   71183 host.go:66] Checking if "embed-certs-836868" exists ...
	I0910 19:00:26.337501   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.337547   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.337552   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.337583   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.337638   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.337677   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.339741   71183 out.go:177] * Verifying Kubernetes components...
	I0910 19:00:26.341792   71183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:00:26.354154   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37431
	I0910 19:00:26.354750   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.355345   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.355379   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.355756   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.356316   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
	I0910 19:00:26.356389   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.356428   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.356508   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35641
	I0910 19:00:26.356810   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.356893   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.357384   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.357411   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.361164   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.361278   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.361302   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.361363   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.361709   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.362446   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.362483   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.364762   71183 addons.go:234] Setting addon default-storageclass=true in "embed-certs-836868"
	W0910 19:00:26.364786   71183 addons.go:243] addon default-storageclass should already be in state true
	I0910 19:00:26.364814   71183 host.go:66] Checking if "embed-certs-836868" exists ...
	I0910 19:00:26.365165   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.365230   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.379158   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35583
	I0910 19:00:26.379696   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.380235   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.380266   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.380654   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.380865   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.382030   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0910 19:00:26.382358   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.382892   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.382912   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.382928   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:26.383271   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.383441   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.385129   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:26.385171   71183 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 19:00:26.385687   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40109
	I0910 19:00:26.386001   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.386217   71183 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:00:21.723833   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:23.724422   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:25.724456   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:23.034262   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:25.035125   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:26.386227   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 19:00:26.386289   71183 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 19:00:26.386309   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:26.386518   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.386533   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.386931   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.387566   71183 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:00:26.387651   71183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 19:00:26.387672   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:26.387618   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.387760   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.389782   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.389941   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:26.390190   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.390263   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:26.390558   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:26.390744   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:26.390921   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:26.391058   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.391542   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:26.391585   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.391788   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:26.391941   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:26.392097   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:26.392256   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:26.404601   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42887
	I0910 19:00:26.405167   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.406097   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.406655   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.407006   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.407163   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.409223   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:26.409437   71183 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 19:00:26.409454   71183 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 19:00:26.409470   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:26.412388   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.412812   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:26.412831   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.413010   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:26.413177   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:26.413333   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:26.413474   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:26.533906   71183 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:00:26.552203   71183 node_ready.go:35] waiting up to 6m0s for node "embed-certs-836868" to be "Ready" ...
	I0910 19:00:26.687774   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 19:00:26.687804   71183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 19:00:26.690124   71183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:00:26.737647   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 19:00:26.737673   71183 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 19:00:26.739650   71183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 19:00:26.783096   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:00:26.783125   71183 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 19:00:26.828766   71183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:00:22.841048   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.341180   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.841325   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:24.340485   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:24.841340   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:25.340935   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:25.840886   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:26.340826   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:26.840344   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:27.341189   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:27.844896   71183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.154733205s)
	I0910 19:00:27.844931   71183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.105250764s)
	I0910 19:00:27.844944   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.844969   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.844979   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.844980   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.845406   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.845420   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.845434   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.845446   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.845464   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.845471   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.845702   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.845733   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.845747   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.847084   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.847101   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.847110   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.847118   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.847308   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.847323   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.852938   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.852956   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.853198   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.853219   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.853224   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.879527   71183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.05071539s)
	I0910 19:00:27.879577   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.879597   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.880030   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.880050   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.880059   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.880081   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.880381   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.880405   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.880416   71183 addons.go:475] Verifying addon metrics-server=true in "embed-certs-836868"
	I0910 19:00:27.880383   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.883034   71183 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0910 19:00:28.222881   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:30.223636   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:27.535036   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:30.034633   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:27.884243   71183 addons.go:510] duration metric: took 1.547392632s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0910 19:00:28.556786   71183 node_ready.go:53] node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:31.055519   71183 node_ready.go:53] node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:27.840306   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:28.340657   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:28.841179   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:29.340881   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:29.840957   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:30.341260   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:30.841151   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:31.340930   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:31.840360   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:32.341199   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:32.724435   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:35.223194   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:32.533611   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:34.534941   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:37.034007   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:33.056381   71183 node_ready.go:53] node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:34.056156   71183 node_ready.go:49] node "embed-certs-836868" has status "Ready":"True"
	I0910 19:00:34.056191   71183 node_ready.go:38] duration metric: took 7.503955102s for node "embed-certs-836868" to be "Ready" ...
	I0910 19:00:34.056200   71183 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:00:34.063331   71183 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:34.068294   71183 pod_ready.go:93] pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:34.068322   71183 pod_ready.go:82] duration metric: took 4.96275ms for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:34.068335   71183 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:36.077798   71183 pod_ready.go:103] pod "etcd-embed-certs-836868" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:32.841192   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:33.340518   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:33.840995   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:34.341016   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:34.840480   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:35.340647   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:35.840416   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:36.340921   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:36.840557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:37.340956   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:37.224065   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:39.723852   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:39.533725   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:41.534430   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:37.576189   71183 pod_ready.go:93] pod "etcd-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.576218   71183 pod_ready.go:82] duration metric: took 3.507872898s for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.576238   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.582150   71183 pod_ready.go:93] pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.582167   71183 pod_ready.go:82] duration metric: took 5.921544ms for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.582175   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.586941   71183 pod_ready.go:93] pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.586956   71183 pod_ready.go:82] duration metric: took 4.774648ms for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.586963   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.591829   71183 pod_ready.go:93] pod "kube-proxy-4fddv" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.591846   71183 pod_ready.go:82] duration metric: took 4.876938ms for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.591854   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.657930   71183 pod_ready.go:93] pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.657952   71183 pod_ready.go:82] duration metric: took 66.092785ms for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.657962   71183 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:39.665465   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:41.666629   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:37.841210   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:38.341302   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:38.840926   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:39.340558   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:39.840395   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:40.341022   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:40.841093   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:41.341228   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:41.841103   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:42.340329   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:42.223446   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:44.223533   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:46.224840   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:44.033565   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:46.034402   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:44.164336   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:46.164983   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:42.841000   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:43.341147   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:43.840534   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:44.340988   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:44.840926   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:45.340859   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:45.840877   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:46.340930   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:46.841175   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:47.341064   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.722930   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:50.723539   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:48.036816   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:50.534367   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:48.667433   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:51.164114   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:47.841037   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.341204   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.840961   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:49.340679   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:49.841173   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:50.340751   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:50.841158   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:51.340999   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:51.840349   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:52.340383   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:52.723945   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:55.224168   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:53.034234   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:55.533690   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:53.164294   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:55.666369   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:52.840991   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:53.340439   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:53.840487   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:54.340407   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:54.840557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:55.340603   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:55.840619   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:56.340844   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:56.841190   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:57.340927   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:57.724247   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:00.223715   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:58.033639   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:00.034297   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:57.670234   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:00.164278   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:02.164755   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:57.840798   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:58.340905   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:58.841330   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:59.340743   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:59.840256   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:00.340970   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:00.840732   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:01.340927   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:01.341014   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:01.378922   72122 cri.go:89] found id: ""
	I0910 19:01:01.378953   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.378964   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:01.378971   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:01.379032   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:01.413274   72122 cri.go:89] found id: ""
	I0910 19:01:01.413302   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.413313   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:01.413320   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:01.413383   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:01.449165   72122 cri.go:89] found id: ""
	I0910 19:01:01.449204   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.449215   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:01.449221   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:01.449291   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:01.484627   72122 cri.go:89] found id: ""
	I0910 19:01:01.484650   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.484657   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:01.484663   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:01.484720   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:01.519332   72122 cri.go:89] found id: ""
	I0910 19:01:01.519357   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.519364   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:01.519370   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:01.519424   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:01.554080   72122 cri.go:89] found id: ""
	I0910 19:01:01.554102   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.554109   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:01.554114   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:01.554160   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:01.590100   72122 cri.go:89] found id: ""
	I0910 19:01:01.590131   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.590143   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:01.590149   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:01.590208   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:01.623007   72122 cri.go:89] found id: ""
	I0910 19:01:01.623034   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.623045   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:01.623055   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:01.623070   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:01.679940   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:01.679971   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:01.694183   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:01.694218   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:01.826997   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:01.827025   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:01.827038   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:01.903885   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:01.903926   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:02.224039   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:04.224422   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:02.533395   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:05.034075   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:04.665680   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:06.665874   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:04.450792   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:04.471427   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:04.471501   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:04.521450   72122 cri.go:89] found id: ""
	I0910 19:01:04.521484   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.521494   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:04.521503   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:04.521562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:04.577588   72122 cri.go:89] found id: ""
	I0910 19:01:04.577622   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.577633   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:04.577641   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:04.577707   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:04.615558   72122 cri.go:89] found id: ""
	I0910 19:01:04.615586   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.615594   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:04.615599   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:04.615652   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:04.655763   72122 cri.go:89] found id: ""
	I0910 19:01:04.655793   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.655806   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:04.655815   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:04.655881   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:04.692620   72122 cri.go:89] found id: ""
	I0910 19:01:04.692642   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.692649   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:04.692654   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:04.692709   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:04.730575   72122 cri.go:89] found id: ""
	I0910 19:01:04.730601   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.730611   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:04.730616   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:04.730665   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:04.766716   72122 cri.go:89] found id: ""
	I0910 19:01:04.766742   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.766749   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:04.766754   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:04.766799   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:04.808122   72122 cri.go:89] found id: ""
	I0910 19:01:04.808151   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.808162   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:04.808173   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:04.808185   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:04.858563   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:04.858592   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:04.872323   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:04.872350   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:04.942541   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:04.942571   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:04.942588   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:05.022303   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:05.022338   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:06.723760   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:08.724550   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:11.223094   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:07.533060   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:09.534466   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:12.034244   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:09.163526   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:11.164502   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:07.562092   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:07.575254   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:07.575308   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:07.616583   72122 cri.go:89] found id: ""
	I0910 19:01:07.616607   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.616615   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:07.616620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:07.616676   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:07.654676   72122 cri.go:89] found id: ""
	I0910 19:01:07.654700   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.654711   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:07.654718   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:07.654790   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:07.690054   72122 cri.go:89] found id: ""
	I0910 19:01:07.690085   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.690096   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:07.690104   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:07.690171   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:07.724273   72122 cri.go:89] found id: ""
	I0910 19:01:07.724295   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.724302   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:07.724307   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:07.724363   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:07.757621   72122 cri.go:89] found id: ""
	I0910 19:01:07.757646   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.757654   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:07.757660   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:07.757716   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:07.791502   72122 cri.go:89] found id: ""
	I0910 19:01:07.791533   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.791543   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:07.791557   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:07.791620   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:07.825542   72122 cri.go:89] found id: ""
	I0910 19:01:07.825577   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.825586   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:07.825592   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:07.825649   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:07.862278   72122 cri.go:89] found id: ""
	I0910 19:01:07.862303   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.862312   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:07.862320   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:07.862331   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:07.952016   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:07.952059   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:07.997004   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:07.997034   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:08.047745   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:08.047783   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:08.064712   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:08.064736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:08.136822   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:10.637017   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:10.650113   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:10.650198   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:10.687477   72122 cri.go:89] found id: ""
	I0910 19:01:10.687504   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.687513   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:10.687520   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:10.687594   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:10.721410   72122 cri.go:89] found id: ""
	I0910 19:01:10.721437   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.721447   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:10.721455   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:10.721514   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:10.757303   72122 cri.go:89] found id: ""
	I0910 19:01:10.757330   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.757338   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:10.757343   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:10.757396   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:10.794761   72122 cri.go:89] found id: ""
	I0910 19:01:10.794788   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.794799   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:10.794806   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:10.794885   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:10.828631   72122 cri.go:89] found id: ""
	I0910 19:01:10.828657   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.828668   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:10.828675   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:10.828737   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:10.863609   72122 cri.go:89] found id: ""
	I0910 19:01:10.863634   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.863641   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:10.863646   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:10.863734   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:10.899299   72122 cri.go:89] found id: ""
	I0910 19:01:10.899324   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.899335   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:10.899342   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:10.899403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:10.939233   72122 cri.go:89] found id: ""
	I0910 19:01:10.939259   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.939268   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:10.939277   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:10.939290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:10.976599   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:10.976627   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:11.029099   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:11.029144   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:11.045401   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:11.045426   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:11.119658   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:11.119679   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:11.119696   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:13.224189   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:15.723673   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:14.034325   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:16.534463   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:13.663847   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:15.664387   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:13.698696   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:13.712317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:13.712386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:13.747442   72122 cri.go:89] found id: ""
	I0910 19:01:13.747470   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.747480   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:13.747487   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:13.747555   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:13.782984   72122 cri.go:89] found id: ""
	I0910 19:01:13.783008   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.783015   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:13.783021   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:13.783078   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:13.820221   72122 cri.go:89] found id: ""
	I0910 19:01:13.820245   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.820256   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:13.820262   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:13.820322   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:13.854021   72122 cri.go:89] found id: ""
	I0910 19:01:13.854056   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.854068   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:13.854075   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:13.854138   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:13.888292   72122 cri.go:89] found id: ""
	I0910 19:01:13.888321   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.888331   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:13.888338   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:13.888398   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:13.922301   72122 cri.go:89] found id: ""
	I0910 19:01:13.922330   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.922341   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:13.922349   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:13.922408   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:13.959977   72122 cri.go:89] found id: ""
	I0910 19:01:13.960002   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.960010   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:13.960015   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:13.960074   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:13.995255   72122 cri.go:89] found id: ""
	I0910 19:01:13.995282   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.995293   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:13.995308   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:13.995323   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:14.050760   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:14.050790   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:14.064694   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:14.064723   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:14.137406   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:14.137431   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:14.137447   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:14.216624   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:14.216657   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:16.765643   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:16.778746   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:16.778821   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:16.814967   72122 cri.go:89] found id: ""
	I0910 19:01:16.814999   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.815010   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:16.815017   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:16.815073   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:16.850306   72122 cri.go:89] found id: ""
	I0910 19:01:16.850334   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.850345   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:16.850352   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:16.850413   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:16.886104   72122 cri.go:89] found id: ""
	I0910 19:01:16.886134   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.886144   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:16.886152   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:16.886218   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:16.921940   72122 cri.go:89] found id: ""
	I0910 19:01:16.921968   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.921977   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:16.921983   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:16.922032   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:16.956132   72122 cri.go:89] found id: ""
	I0910 19:01:16.956166   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.956177   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:16.956185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:16.956247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:16.988240   72122 cri.go:89] found id: ""
	I0910 19:01:16.988269   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.988278   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:16.988284   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:16.988330   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:17.022252   72122 cri.go:89] found id: ""
	I0910 19:01:17.022281   72122 logs.go:276] 0 containers: []
	W0910 19:01:17.022291   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:17.022297   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:17.022364   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:17.058664   72122 cri.go:89] found id: ""
	I0910 19:01:17.058693   72122 logs.go:276] 0 containers: []
	W0910 19:01:17.058703   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:17.058715   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:17.058740   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:17.136927   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:17.136964   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:17.189427   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:17.189457   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:17.242193   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:17.242225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:17.257878   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:17.257908   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:17.330096   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:17.724465   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:20.224230   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:18.534806   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:21.034368   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:17.667897   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:20.165174   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:22.165421   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:19.831030   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:19.844516   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:19.844581   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:19.879878   72122 cri.go:89] found id: ""
	I0910 19:01:19.879908   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.879919   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:19.879927   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:19.879988   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:19.915992   72122 cri.go:89] found id: ""
	I0910 19:01:19.916018   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.916025   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:19.916030   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:19.916084   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:19.949206   72122 cri.go:89] found id: ""
	I0910 19:01:19.949232   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.949242   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:19.949249   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:19.949311   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:19.983011   72122 cri.go:89] found id: ""
	I0910 19:01:19.983035   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.983043   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:19.983048   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:19.983096   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:20.018372   72122 cri.go:89] found id: ""
	I0910 19:01:20.018394   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.018402   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:20.018408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:20.018466   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:20.053941   72122 cri.go:89] found id: ""
	I0910 19:01:20.053967   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.053975   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:20.053980   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:20.054037   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:20.084999   72122 cri.go:89] found id: ""
	I0910 19:01:20.085026   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.085035   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:20.085042   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:20.085115   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:20.124036   72122 cri.go:89] found id: ""
	I0910 19:01:20.124063   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.124072   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:20.124086   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:20.124103   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:20.176917   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:20.176944   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:20.190831   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:20.190852   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:20.257921   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:20.257942   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:20.257954   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:20.335320   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:20.335350   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:22.723788   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:25.223765   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:23.034456   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:25.534821   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:24.663208   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:26.664282   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:22.875167   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:22.888803   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:22.888858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:22.922224   72122 cri.go:89] found id: ""
	I0910 19:01:22.922252   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.922264   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:22.922270   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:22.922328   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:22.959502   72122 cri.go:89] found id: ""
	I0910 19:01:22.959536   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.959546   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:22.959553   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:22.959619   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:22.992914   72122 cri.go:89] found id: ""
	I0910 19:01:22.992944   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.992955   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:22.992962   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:22.993022   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:23.028342   72122 cri.go:89] found id: ""
	I0910 19:01:23.028367   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.028376   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:23.028384   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:23.028443   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:23.064715   72122 cri.go:89] found id: ""
	I0910 19:01:23.064742   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.064753   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:23.064761   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:23.064819   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:23.100752   72122 cri.go:89] found id: ""
	I0910 19:01:23.100781   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.100789   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:23.100795   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:23.100857   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:23.136017   72122 cri.go:89] found id: ""
	I0910 19:01:23.136045   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.136055   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:23.136062   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:23.136108   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:23.170787   72122 cri.go:89] found id: ""
	I0910 19:01:23.170811   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.170819   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:23.170826   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:23.170840   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:23.210031   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:23.210059   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:23.261525   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:23.261557   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:23.275611   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:23.275636   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:23.348543   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:23.348568   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:23.348582   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:25.929406   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:25.942658   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:25.942737   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:25.977231   72122 cri.go:89] found id: ""
	I0910 19:01:25.977260   72122 logs.go:276] 0 containers: []
	W0910 19:01:25.977270   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:25.977277   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:25.977336   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:26.015060   72122 cri.go:89] found id: ""
	I0910 19:01:26.015093   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.015103   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:26.015110   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:26.015180   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:26.053618   72122 cri.go:89] found id: ""
	I0910 19:01:26.053643   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.053651   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:26.053656   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:26.053712   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:26.090489   72122 cri.go:89] found id: ""
	I0910 19:01:26.090515   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.090523   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:26.090529   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:26.090587   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:26.126687   72122 cri.go:89] found id: ""
	I0910 19:01:26.126710   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.126718   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:26.126723   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:26.126771   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:26.160901   72122 cri.go:89] found id: ""
	I0910 19:01:26.160939   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.160951   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:26.160959   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:26.161017   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:26.195703   72122 cri.go:89] found id: ""
	I0910 19:01:26.195728   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.195737   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:26.195743   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:26.195794   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:26.230394   72122 cri.go:89] found id: ""
	I0910 19:01:26.230414   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.230422   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:26.230430   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:26.230444   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:26.296884   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:26.296905   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:26.296926   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:26.371536   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:26.371569   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:26.412926   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:26.412958   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:26.462521   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:26.462550   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:27.725957   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:30.224312   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:28.034338   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:30.034794   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:32.035284   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:28.668205   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:31.166271   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:28.976550   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:28.989517   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:28.989586   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:29.025638   72122 cri.go:89] found id: ""
	I0910 19:01:29.025662   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.025671   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:29.025677   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:29.025719   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:29.067473   72122 cri.go:89] found id: ""
	I0910 19:01:29.067495   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.067502   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:29.067507   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:29.067556   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:29.105587   72122 cri.go:89] found id: ""
	I0910 19:01:29.105616   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.105628   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:29.105635   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:29.105696   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:29.142427   72122 cri.go:89] found id: ""
	I0910 19:01:29.142458   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.142468   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:29.142474   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:29.142530   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:29.178553   72122 cri.go:89] found id: ""
	I0910 19:01:29.178575   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.178582   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:29.178587   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:29.178638   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:29.212997   72122 cri.go:89] found id: ""
	I0910 19:01:29.213025   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.213034   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:29.213040   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:29.213109   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:29.247057   72122 cri.go:89] found id: ""
	I0910 19:01:29.247083   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.247091   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:29.247097   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:29.247151   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:29.285042   72122 cri.go:89] found id: ""
	I0910 19:01:29.285084   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.285096   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:29.285107   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:29.285131   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:29.336003   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:29.336033   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:29.349867   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:29.349890   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:29.422006   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:29.422028   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:29.422043   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:29.504047   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:29.504079   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:32.050723   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:32.063851   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:32.063904   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:32.100816   72122 cri.go:89] found id: ""
	I0910 19:01:32.100841   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.100851   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:32.100858   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:32.100924   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:32.134863   72122 cri.go:89] found id: ""
	I0910 19:01:32.134892   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.134902   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:32.134909   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:32.134967   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:32.169873   72122 cri.go:89] found id: ""
	I0910 19:01:32.169901   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.169912   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:32.169919   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:32.169973   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:32.202161   72122 cri.go:89] found id: ""
	I0910 19:01:32.202187   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.202197   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:32.202204   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:32.202264   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:32.236850   72122 cri.go:89] found id: ""
	I0910 19:01:32.236879   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.236888   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:32.236896   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:32.236957   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:32.271479   72122 cri.go:89] found id: ""
	I0910 19:01:32.271511   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.271530   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:32.271542   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:32.271701   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:32.306724   72122 cri.go:89] found id: ""
	I0910 19:01:32.306747   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.306754   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:32.306760   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:32.306811   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:32.341153   72122 cri.go:89] found id: ""
	I0910 19:01:32.341184   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.341195   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:32.341206   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:32.341221   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:32.393087   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:32.393122   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:32.406565   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:32.406591   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:32.478030   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:32.478048   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:32.478079   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:32.224371   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:34.723372   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:34.533510   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:37.033933   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:33.671725   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:36.165396   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:32.568440   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:32.568478   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:35.112022   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:35.125210   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:35.125286   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:35.160716   72122 cri.go:89] found id: ""
	I0910 19:01:35.160743   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.160753   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:35.160759   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:35.160817   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:35.196500   72122 cri.go:89] found id: ""
	I0910 19:01:35.196530   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.196541   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:35.196548   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:35.196622   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:35.232476   72122 cri.go:89] found id: ""
	I0910 19:01:35.232510   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.232521   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:35.232528   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:35.232590   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:35.269612   72122 cri.go:89] found id: ""
	I0910 19:01:35.269635   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.269644   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:35.269649   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:35.269697   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:35.307368   72122 cri.go:89] found id: ""
	I0910 19:01:35.307393   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.307401   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:35.307408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:35.307475   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:35.342079   72122 cri.go:89] found id: ""
	I0910 19:01:35.342108   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.342119   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:35.342126   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:35.342188   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:35.379732   72122 cri.go:89] found id: ""
	I0910 19:01:35.379761   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.379771   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:35.379778   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:35.379840   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:35.419067   72122 cri.go:89] found id: ""
	I0910 19:01:35.419098   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.419109   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:35.419120   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:35.419139   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:35.472459   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:35.472494   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:35.487044   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:35.487078   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:35.565242   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:35.565264   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:35.565282   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:35.645918   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:35.645951   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:36.724330   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:38.724368   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:41.224272   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:39.533968   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:41.534579   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:38.666059   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:41.164158   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:38.189238   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:38.203973   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:38.204035   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:38.241402   72122 cri.go:89] found id: ""
	I0910 19:01:38.241428   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.241438   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:38.241446   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:38.241506   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:38.280657   72122 cri.go:89] found id: ""
	I0910 19:01:38.280685   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.280693   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:38.280698   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:38.280753   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:38.319697   72122 cri.go:89] found id: ""
	I0910 19:01:38.319725   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.319735   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:38.319742   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:38.319804   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:38.356766   72122 cri.go:89] found id: ""
	I0910 19:01:38.356799   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.356810   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:38.356817   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:38.356876   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:38.395468   72122 cri.go:89] found id: ""
	I0910 19:01:38.395497   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.395508   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:38.395516   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:38.395577   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:38.434942   72122 cri.go:89] found id: ""
	I0910 19:01:38.434965   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.434974   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:38.434979   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:38.435025   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:38.470687   72122 cri.go:89] found id: ""
	I0910 19:01:38.470715   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.470724   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:38.470729   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:38.470777   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:38.505363   72122 cri.go:89] found id: ""
	I0910 19:01:38.505394   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.505405   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:38.505417   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:38.505432   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:38.557735   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:38.557770   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:38.586094   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:38.586128   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:38.665190   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:38.665215   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:38.665231   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:38.743748   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:38.743779   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:41.284310   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:41.299086   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:41.299157   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:41.340453   72122 cri.go:89] found id: ""
	I0910 19:01:41.340476   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.340484   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:41.340489   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:41.340544   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:41.374028   72122 cri.go:89] found id: ""
	I0910 19:01:41.374052   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.374060   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:41.374066   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:41.374117   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:41.413888   72122 cri.go:89] found id: ""
	I0910 19:01:41.413915   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.413929   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:41.413935   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:41.413994   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:41.450846   72122 cri.go:89] found id: ""
	I0910 19:01:41.450873   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.450883   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:41.450890   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:41.450950   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:41.484080   72122 cri.go:89] found id: ""
	I0910 19:01:41.484107   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.484115   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:41.484120   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:41.484168   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:41.523652   72122 cri.go:89] found id: ""
	I0910 19:01:41.523677   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.523685   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:41.523690   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:41.523749   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:41.563690   72122 cri.go:89] found id: ""
	I0910 19:01:41.563715   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.563727   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:41.563734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:41.563797   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:41.602101   72122 cri.go:89] found id: ""
	I0910 19:01:41.602122   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.602130   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:41.602137   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:41.602152   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:41.655459   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:41.655488   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:41.670037   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:41.670062   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:41.741399   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:41.741417   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:41.741428   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:41.817411   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:41.817445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:43.726285   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:46.223867   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:44.034404   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:46.533246   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:43.666629   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:46.164675   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:44.363631   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:44.378279   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:44.378344   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:44.412450   72122 cri.go:89] found id: ""
	I0910 19:01:44.412486   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.412495   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:44.412502   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:44.412569   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:44.448378   72122 cri.go:89] found id: ""
	I0910 19:01:44.448407   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.448415   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:44.448420   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:44.448470   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:44.483478   72122 cri.go:89] found id: ""
	I0910 19:01:44.483516   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.483524   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:44.483530   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:44.483584   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:44.517787   72122 cri.go:89] found id: ""
	I0910 19:01:44.517812   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.517822   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:44.517828   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:44.517886   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:44.554909   72122 cri.go:89] found id: ""
	I0910 19:01:44.554939   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.554950   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:44.554957   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:44.555018   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:44.589865   72122 cri.go:89] found id: ""
	I0910 19:01:44.589890   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.589909   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:44.589923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:44.589968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:44.626712   72122 cri.go:89] found id: ""
	I0910 19:01:44.626739   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.626749   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:44.626756   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:44.626815   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:44.664985   72122 cri.go:89] found id: ""
	I0910 19:01:44.665067   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.665103   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:44.665114   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:44.665165   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:44.721160   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:44.721196   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:44.735339   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:44.735366   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:44.810056   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:44.810080   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:44.810094   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:44.898822   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:44.898871   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
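	(The block above is one pass of minikube's log collector: it first checks for a live kube-apiserver process with pgrep, then asks CRI-O for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds no containers at all, and falls back to dumping kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal sketch of the same checks run by hand on the node follows; the command lines are taken from the log itself, while "minikube ssh -p <profile>" and the profile placeholder are assumptions, since the profile name does not appear in this excerpt.)

	    # open a shell on the affected node (profile name is an assumption)
	    minikube ssh -p <profile>

	    # is any apiserver process alive?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	    # list all CRI-O containers for one component (repeat per component name)
	    sudo crictl ps -a --quiet --name=kube-apiserver

	    # the fallback log dumps minikube performs when nothing is found
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo crictl ps -a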
	I0910 19:01:47.438440   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:47.451438   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:47.451506   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:48.723661   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:50.723768   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:48.534671   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:51.033397   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:48.164739   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:50.665165   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:47.491703   72122 cri.go:89] found id: ""
	I0910 19:01:47.491729   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.491740   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:47.491747   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:47.491811   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:47.526834   72122 cri.go:89] found id: ""
	I0910 19:01:47.526862   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.526874   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:47.526880   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:47.526940   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:47.570463   72122 cri.go:89] found id: ""
	I0910 19:01:47.570488   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.570496   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:47.570503   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:47.570545   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:47.608691   72122 cri.go:89] found id: ""
	I0910 19:01:47.608715   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.608727   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:47.608734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:47.608780   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:47.648279   72122 cri.go:89] found id: ""
	I0910 19:01:47.648308   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.648316   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:47.648324   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:47.648386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:47.684861   72122 cri.go:89] found id: ""
	I0910 19:01:47.684885   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.684892   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:47.684897   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:47.684947   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:47.721004   72122 cri.go:89] found id: ""
	I0910 19:01:47.721037   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.721049   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:47.721056   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:47.721134   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:47.756154   72122 cri.go:89] found id: ""
	I0910 19:01:47.756181   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.756192   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:47.756202   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:47.756217   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:47.806860   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:47.806889   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:47.822419   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:47.822445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:47.891966   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:47.891986   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:47.892000   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:47.978510   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:47.978561   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:50.519264   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:50.533576   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:50.533630   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:50.567574   72122 cri.go:89] found id: ""
	I0910 19:01:50.567601   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.567612   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:50.567619   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:50.567678   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:50.608824   72122 cri.go:89] found id: ""
	I0910 19:01:50.608850   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.608858   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:50.608863   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:50.608939   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:50.644502   72122 cri.go:89] found id: ""
	I0910 19:01:50.644530   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.644538   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:50.644544   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:50.644590   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:50.682309   72122 cri.go:89] found id: ""
	I0910 19:01:50.682332   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.682340   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:50.682345   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:50.682404   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:50.735372   72122 cri.go:89] found id: ""
	I0910 19:01:50.735398   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.735410   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:50.735418   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:50.735482   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:50.786364   72122 cri.go:89] found id: ""
	I0910 19:01:50.786391   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.786401   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:50.786408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:50.786464   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:50.831525   72122 cri.go:89] found id: ""
	I0910 19:01:50.831564   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.831575   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:50.831582   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:50.831645   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:50.873457   72122 cri.go:89] found id: ""
	I0910 19:01:50.873482   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.873493   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:50.873503   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:50.873524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:50.956032   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:50.956069   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:50.996871   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:50.996904   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:51.047799   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:51.047824   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:51.061946   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:51.061970   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:51.136302   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
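	(Every "describe nodes" attempt in this stretch fails the same way: kubectl pointed at localhost:8443 gets connection refused, which is consistent with the empty container listings above, since no kube-apiserver is running to serve that port. A quick manual check of whether anything is listening there could look like the sketch below; these two commands are standard Linux/Kubernetes tooling and do not appear verbatim in this log.)

	    # check for a listener on the apiserver port
	    sudo ss -tlnp | grep 8443

	    # probe the apiserver health endpoint directly (fails while the apiserver is down)
	    curl -sk https://localhost:8443/healthz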
	I0910 19:01:53.222492   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:55.223835   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:53.034478   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:55.532623   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:52.665991   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:55.164343   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
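	(The interleaved pod_ready.go:103 lines come from three other test clusters, PIDs 71627, 71529 and 71183, each polling its metrics-server pod and repeatedly seeing the Ready condition as False. An equivalent one-off check with kubectl is sketched below; the pod name is taken from the log, while the context placeholder is an assumption.)

	    # print the Ready condition of the pod being polled
	    kubectl --context <context> -n kube-system get pod metrics-server-6867b74b74-4sfwg \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

	    # or block until it becomes Ready (times out while the condition stays False)
	    kubectl --context <context> -n kube-system wait --for=condition=Ready \
	      pod/metrics-server-6867b74b74-4sfwg --timeout=60s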
	I0910 19:01:53.636464   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:53.649971   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:53.650054   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:53.688172   72122 cri.go:89] found id: ""
	I0910 19:01:53.688201   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.688211   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:53.688217   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:53.688274   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:53.725094   72122 cri.go:89] found id: ""
	I0910 19:01:53.725119   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.725128   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:53.725135   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:53.725196   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:53.763866   72122 cri.go:89] found id: ""
	I0910 19:01:53.763893   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.763907   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:53.763914   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:53.763971   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:53.797760   72122 cri.go:89] found id: ""
	I0910 19:01:53.797787   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.797798   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:53.797805   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:53.797862   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:53.830305   72122 cri.go:89] found id: ""
	I0910 19:01:53.830332   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.830340   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:53.830346   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:53.830402   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:53.861970   72122 cri.go:89] found id: ""
	I0910 19:01:53.861995   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.862003   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:53.862009   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:53.862059   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:53.896577   72122 cri.go:89] found id: ""
	I0910 19:01:53.896600   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.896609   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:53.896614   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:53.896660   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:53.935051   72122 cri.go:89] found id: ""
	I0910 19:01:53.935077   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.935086   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:53.935094   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:53.935105   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:53.950252   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:53.950276   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:54.023327   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:54.023346   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:54.023361   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:54.101605   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:54.101643   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:54.142906   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:54.142930   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:56.697701   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:56.717755   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:56.717836   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:56.763564   72122 cri.go:89] found id: ""
	I0910 19:01:56.763594   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.763606   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:56.763613   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:56.763675   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:56.815780   72122 cri.go:89] found id: ""
	I0910 19:01:56.815808   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.815816   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:56.815821   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:56.815883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:56.848983   72122 cri.go:89] found id: ""
	I0910 19:01:56.849013   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.849024   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:56.849032   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:56.849100   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:56.880660   72122 cri.go:89] found id: ""
	I0910 19:01:56.880690   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.880702   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:56.880709   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:56.880756   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:56.922836   72122 cri.go:89] found id: ""
	I0910 19:01:56.922860   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.922867   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:56.922873   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:56.922938   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:56.963474   72122 cri.go:89] found id: ""
	I0910 19:01:56.963505   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.963517   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:56.963524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:56.963585   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:56.996837   72122 cri.go:89] found id: ""
	I0910 19:01:56.996864   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.996872   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:56.996877   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:56.996925   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:57.029594   72122 cri.go:89] found id: ""
	I0910 19:01:57.029629   72122 logs.go:276] 0 containers: []
	W0910 19:01:57.029640   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:57.029651   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:57.029664   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:57.083745   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:57.083772   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:57.099269   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:57.099293   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:57.174098   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:57.174118   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:57.174129   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:57.258833   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:57.258869   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:57.224384   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:59.722547   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:57.533178   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:59.533798   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:02.035089   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:57.665383   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:00.164920   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:59.800644   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:59.814728   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:59.814805   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:59.854081   72122 cri.go:89] found id: ""
	I0910 19:01:59.854113   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.854124   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:59.854133   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:59.854197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:59.889524   72122 cri.go:89] found id: ""
	I0910 19:01:59.889550   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.889560   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:59.889567   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:59.889626   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:59.925833   72122 cri.go:89] found id: ""
	I0910 19:01:59.925859   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.925866   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:59.925872   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:59.925935   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:59.962538   72122 cri.go:89] found id: ""
	I0910 19:01:59.962575   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.962586   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:59.962593   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:59.962650   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:59.996994   72122 cri.go:89] found id: ""
	I0910 19:01:59.997025   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.997037   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:59.997045   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:59.997126   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:00.032881   72122 cri.go:89] found id: ""
	I0910 19:02:00.032905   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.032915   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:00.032923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:00.032988   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:00.065838   72122 cri.go:89] found id: ""
	I0910 19:02:00.065861   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.065869   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:00.065874   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:00.065927   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:00.099479   72122 cri.go:89] found id: ""
	I0910 19:02:00.099505   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.099516   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:00.099526   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:00.099540   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:00.182661   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:00.182689   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:00.223514   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:00.223553   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:00.273695   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:00.273721   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:00.287207   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:00.287233   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:00.353975   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:01.724647   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:04.224071   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:06.225475   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:04.534230   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:06.534474   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:02.665228   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:04.667935   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:07.163506   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:02.854145   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:02.867413   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:02.867484   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:02.904299   72122 cri.go:89] found id: ""
	I0910 19:02:02.904327   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.904335   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:02.904340   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:02.904392   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:02.940981   72122 cri.go:89] found id: ""
	I0910 19:02:02.941010   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.941019   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:02.941024   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:02.941099   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:02.980013   72122 cri.go:89] found id: ""
	I0910 19:02:02.980038   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.980046   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:02.980052   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:02.980111   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:03.020041   72122 cri.go:89] found id: ""
	I0910 19:02:03.020071   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.020080   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:03.020087   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:03.020144   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:03.055228   72122 cri.go:89] found id: ""
	I0910 19:02:03.055264   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.055277   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:03.055285   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:03.055347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:03.088696   72122 cri.go:89] found id: ""
	I0910 19:02:03.088722   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.088730   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:03.088736   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:03.088787   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:03.124753   72122 cri.go:89] found id: ""
	I0910 19:02:03.124776   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.124785   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:03.124792   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:03.124849   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:03.157191   72122 cri.go:89] found id: ""
	I0910 19:02:03.157222   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.157230   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:03.157238   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:03.157248   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:03.239015   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:03.239044   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:03.279323   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:03.279355   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:03.328034   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:03.328067   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:03.341591   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:03.341620   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:03.411057   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:05.911503   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:05.924794   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:05.924868   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:05.958827   72122 cri.go:89] found id: ""
	I0910 19:02:05.958852   72122 logs.go:276] 0 containers: []
	W0910 19:02:05.958859   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:05.958865   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:05.958920   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:05.992376   72122 cri.go:89] found id: ""
	I0910 19:02:05.992412   72122 logs.go:276] 0 containers: []
	W0910 19:02:05.992423   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:05.992429   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:05.992485   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:06.028058   72122 cri.go:89] found id: ""
	I0910 19:02:06.028088   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.028098   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:06.028107   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:06.028162   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:06.066428   72122 cri.go:89] found id: ""
	I0910 19:02:06.066458   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.066470   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:06.066477   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:06.066533   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:06.102750   72122 cri.go:89] found id: ""
	I0910 19:02:06.102774   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.102782   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:06.102787   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:06.102841   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:06.137216   72122 cri.go:89] found id: ""
	I0910 19:02:06.137243   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.137254   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:06.137261   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:06.137323   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:06.175227   72122 cri.go:89] found id: ""
	I0910 19:02:06.175251   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.175259   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:06.175265   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:06.175311   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:06.210197   72122 cri.go:89] found id: ""
	I0910 19:02:06.210222   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.210230   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:06.210238   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:06.210249   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:06.261317   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:06.261353   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:06.275196   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:06.275225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:06.354186   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:06.354205   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:06.354219   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:06.436726   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:06.436763   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:08.723505   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:10.724499   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:09.035939   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:11.534648   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:09.166629   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:11.666941   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:08.979157   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:08.992097   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:08.992156   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:09.025260   72122 cri.go:89] found id: ""
	I0910 19:02:09.025282   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.025289   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:09.025295   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:09.025360   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:09.059139   72122 cri.go:89] found id: ""
	I0910 19:02:09.059166   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.059177   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:09.059186   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:09.059240   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:09.092935   72122 cri.go:89] found id: ""
	I0910 19:02:09.092964   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.092973   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:09.092979   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:09.093027   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:09.127273   72122 cri.go:89] found id: ""
	I0910 19:02:09.127299   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.127310   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:09.127317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:09.127367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:09.163353   72122 cri.go:89] found id: ""
	I0910 19:02:09.163380   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.163389   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:09.163396   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:09.163453   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:09.195371   72122 cri.go:89] found id: ""
	I0910 19:02:09.195396   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.195407   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:09.195414   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:09.195473   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:09.229338   72122 cri.go:89] found id: ""
	I0910 19:02:09.229361   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.229370   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:09.229376   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:09.229432   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:09.262822   72122 cri.go:89] found id: ""
	I0910 19:02:09.262847   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.262857   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:09.262874   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:09.262891   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:09.330079   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:09.330103   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:09.330119   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:09.408969   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:09.409003   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:09.447666   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:09.447702   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:09.501111   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:09.501141   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:12.016407   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:12.030822   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:12.030905   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:12.069191   72122 cri.go:89] found id: ""
	I0910 19:02:12.069218   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.069229   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:12.069236   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:12.069306   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:12.103687   72122 cri.go:89] found id: ""
	I0910 19:02:12.103726   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.103737   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:12.103862   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:12.103937   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:12.142891   72122 cri.go:89] found id: ""
	I0910 19:02:12.142920   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.142932   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:12.142940   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:12.142998   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:12.178966   72122 cri.go:89] found id: ""
	I0910 19:02:12.178991   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.179002   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:12.179010   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:12.179069   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:12.216070   72122 cri.go:89] found id: ""
	I0910 19:02:12.216093   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.216104   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:12.216112   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:12.216161   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:12.251447   72122 cri.go:89] found id: ""
	I0910 19:02:12.251479   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.251492   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:12.251500   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:12.251568   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:12.284640   72122 cri.go:89] found id: ""
	I0910 19:02:12.284666   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.284677   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:12.284682   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:12.284743   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:12.319601   72122 cri.go:89] found id: ""
	I0910 19:02:12.319625   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.319632   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:12.319639   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:12.319650   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:12.372932   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:12.372965   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:12.387204   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:12.387228   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:12.459288   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:12.459308   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:12.459323   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:13.223679   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:15.224341   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:14.034036   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:16.533341   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:13.667258   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:16.164610   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:12.549161   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:12.549198   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:15.092557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:15.105391   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:15.105456   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:15.139486   72122 cri.go:89] found id: ""
	I0910 19:02:15.139515   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.139524   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:15.139530   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:15.139591   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:15.173604   72122 cri.go:89] found id: ""
	I0910 19:02:15.173630   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.173641   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:15.173648   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:15.173710   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:15.208464   72122 cri.go:89] found id: ""
	I0910 19:02:15.208492   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.208503   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:15.208510   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:15.208568   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:15.247536   72122 cri.go:89] found id: ""
	I0910 19:02:15.247567   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.247579   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:15.247586   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:15.247650   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:15.285734   72122 cri.go:89] found id: ""
	I0910 19:02:15.285764   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.285775   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:15.285782   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:15.285858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:15.320755   72122 cri.go:89] found id: ""
	I0910 19:02:15.320782   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.320792   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:15.320798   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:15.320849   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:15.357355   72122 cri.go:89] found id: ""
	I0910 19:02:15.357384   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.357395   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:15.357402   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:15.357463   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:15.392105   72122 cri.go:89] found id: ""
	I0910 19:02:15.392130   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.392137   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:15.392149   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:15.392160   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:15.444433   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:15.444465   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:15.458759   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:15.458784   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:15.523490   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:15.523507   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:15.523524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:15.607584   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:15.607616   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
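The 72122 process repeats the same log-gathering pass (kubelet, dmesg, describe nodes, CRI-O, container status) on every retry while it waits for the control plane to come back. A minimal sketch of the equivalent manual commands, taken directly from the Run: lines above (the kubectl binary path and kubeconfig location are the ones minikube itself uses for this v1.20.0 node):

	# inspect the same sources minikube gathers on each pass
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo crictl ps -a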
	I0910 19:02:17.224472   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:19.723953   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:18.534545   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:20.535036   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:18.667949   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:20.669762   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:18.146611   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:18.160311   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:18.160378   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:18.195072   72122 cri.go:89] found id: ""
	I0910 19:02:18.195099   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.195109   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:18.195127   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:18.195201   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:18.230099   72122 cri.go:89] found id: ""
	I0910 19:02:18.230129   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.230138   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:18.230145   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:18.230201   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:18.268497   72122 cri.go:89] found id: ""
	I0910 19:02:18.268525   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.268534   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:18.268539   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:18.268599   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:18.304929   72122 cri.go:89] found id: ""
	I0910 19:02:18.304966   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.304978   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:18.304985   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:18.305048   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:18.339805   72122 cri.go:89] found id: ""
	I0910 19:02:18.339839   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.339861   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:18.339868   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:18.339925   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:18.378353   72122 cri.go:89] found id: ""
	I0910 19:02:18.378372   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.378381   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:18.378393   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:18.378438   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:18.415175   72122 cri.go:89] found id: ""
	I0910 19:02:18.415195   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.415203   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:18.415208   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:18.415262   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:18.450738   72122 cri.go:89] found id: ""
	I0910 19:02:18.450762   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.450769   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:18.450778   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:18.450793   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:18.530943   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:18.530975   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:18.568983   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:18.569021   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:18.622301   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:18.622336   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:18.635788   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:18.635815   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:18.715729   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
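Every describe-nodes attempt fails the same way because nothing is listening on localhost:8443: the crictl queries above find no kube-apiserver container at all. A quick way to confirm both halves of that from the node, assuming the standard /healthz endpoint on the apiserver's secure port (the port comes from the error text; the health endpoint is an assumption, not something the test runs):

	# no apiserver container is running ...
	sudo crictl ps -a --name=kube-apiserver
	# ... so the secure port refuses connections
	curl -k https://localhost:8443/healthz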
	I0910 19:02:21.216082   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:21.229419   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:21.229488   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:21.265152   72122 cri.go:89] found id: ""
	I0910 19:02:21.265183   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.265193   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:21.265201   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:21.265262   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:21.300766   72122 cri.go:89] found id: ""
	I0910 19:02:21.300797   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.300815   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:21.300823   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:21.300883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:21.333416   72122 cri.go:89] found id: ""
	I0910 19:02:21.333443   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.333452   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:21.333460   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:21.333526   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:21.371112   72122 cri.go:89] found id: ""
	I0910 19:02:21.371142   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.371150   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:21.371156   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:21.371214   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:21.405657   72122 cri.go:89] found id: ""
	I0910 19:02:21.405684   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.405695   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:21.405703   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:21.405755   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:21.440354   72122 cri.go:89] found id: ""
	I0910 19:02:21.440381   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.440392   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:21.440400   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:21.440464   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:21.480165   72122 cri.go:89] found id: ""
	I0910 19:02:21.480189   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.480199   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:21.480206   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:21.480273   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:21.518422   72122 cri.go:89] found id: ""
	I0910 19:02:21.518449   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.518459   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:21.518470   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:21.518486   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:21.572263   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:21.572300   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:21.588179   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:21.588204   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:21.658330   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:21.658356   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:21.658371   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:21.743026   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:21.743063   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:21.724730   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.724844   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:26.225026   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.034593   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:25.037588   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.164712   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:25.664475   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
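The three other processes (71627, 71529, 71183) are each polling a metrics-server pod whose Ready condition stays False for this whole window. A hedged way to inspect the same condition by hand, using one of the pod names from the log (the jsonpath expression and the describe call are standard kubectl usage, not commands the test itself issues):

	# print the Ready condition that the pod_ready check is polling
	kubectl -n kube-system get pod metrics-server-6867b74b74-4sfwg \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# see why it is not Ready (events, probe failures)
	kubectl -n kube-system describe pod metrics-server-6867b74b74-4sfwg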
	I0910 19:02:24.286604   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:24.299783   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:24.299847   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:24.336998   72122 cri.go:89] found id: ""
	I0910 19:02:24.337031   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.337042   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:24.337050   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:24.337123   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:24.374198   72122 cri.go:89] found id: ""
	I0910 19:02:24.374223   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.374231   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:24.374236   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:24.374289   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:24.407783   72122 cri.go:89] found id: ""
	I0910 19:02:24.407812   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.407822   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:24.407828   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:24.407881   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:24.443285   72122 cri.go:89] found id: ""
	I0910 19:02:24.443307   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.443315   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:24.443321   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:24.443367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:24.477176   72122 cri.go:89] found id: ""
	I0910 19:02:24.477198   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.477206   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:24.477212   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:24.477266   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:24.509762   72122 cri.go:89] found id: ""
	I0910 19:02:24.509783   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.509791   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:24.509797   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:24.509858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:24.548746   72122 cri.go:89] found id: ""
	I0910 19:02:24.548775   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.548785   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:24.548793   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:24.548851   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:24.583265   72122 cri.go:89] found id: ""
	I0910 19:02:24.583297   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.583313   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:24.583324   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:24.583338   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:24.634966   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:24.635001   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:24.649844   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:24.649869   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:24.721795   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:24.721824   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:24.721840   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:24.807559   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:24.807593   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:27.352779   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:27.366423   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:27.366495   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:27.399555   72122 cri.go:89] found id: ""
	I0910 19:02:27.399582   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.399591   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:27.399596   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:27.399662   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:27.434151   72122 cri.go:89] found id: ""
	I0910 19:02:27.434179   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.434188   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:27.434194   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:27.434265   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:27.467053   72122 cri.go:89] found id: ""
	I0910 19:02:27.467081   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.467092   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:27.467099   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:27.467156   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:28.724149   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:31.224185   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:27.533697   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:29.533815   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:32.034343   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:27.667816   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:30.164174   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:27.500999   72122 cri.go:89] found id: ""
	I0910 19:02:27.501030   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.501039   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:27.501044   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:27.501114   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:27.537981   72122 cri.go:89] found id: ""
	I0910 19:02:27.538000   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.538007   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:27.538012   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:27.538060   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:27.568622   72122 cri.go:89] found id: ""
	I0910 19:02:27.568649   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.568660   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:27.568668   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:27.568724   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:27.603035   72122 cri.go:89] found id: ""
	I0910 19:02:27.603058   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.603067   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:27.603072   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:27.603131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:27.637624   72122 cri.go:89] found id: ""
	I0910 19:02:27.637651   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.637662   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:27.637673   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:27.637693   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:27.651893   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:27.651915   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:27.723949   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:27.723969   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:27.723983   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:27.801463   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:27.801496   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:27.841969   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:27.842000   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:30.398857   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:30.412720   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:30.412790   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:30.448125   72122 cri.go:89] found id: ""
	I0910 19:02:30.448152   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.448163   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:30.448171   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:30.448234   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:30.481988   72122 cri.go:89] found id: ""
	I0910 19:02:30.482016   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.482027   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:30.482035   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:30.482083   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:30.516548   72122 cri.go:89] found id: ""
	I0910 19:02:30.516576   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.516583   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:30.516589   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:30.516646   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:30.566884   72122 cri.go:89] found id: ""
	I0910 19:02:30.566910   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.566918   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:30.566923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:30.566975   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:30.602278   72122 cri.go:89] found id: ""
	I0910 19:02:30.602306   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.602314   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:30.602319   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:30.602379   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:30.636708   72122 cri.go:89] found id: ""
	I0910 19:02:30.636732   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.636740   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:30.636745   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:30.636797   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:30.681255   72122 cri.go:89] found id: ""
	I0910 19:02:30.681280   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.681295   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:30.681303   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:30.681361   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:30.715516   72122 cri.go:89] found id: ""
	I0910 19:02:30.715543   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.715551   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:30.715560   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:30.715572   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:30.768916   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:30.768948   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:30.783318   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:30.783348   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:30.852901   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:30.852925   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:30.852940   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:30.932276   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:30.932314   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:33.725716   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:36.223970   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:34.533148   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:36.533854   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:32.667516   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:35.164375   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:33.471931   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:33.486152   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:33.486211   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:33.524130   72122 cri.go:89] found id: ""
	I0910 19:02:33.524161   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.524173   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:33.524180   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:33.524238   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:33.562216   72122 cri.go:89] found id: ""
	I0910 19:02:33.562238   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.562247   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:33.562252   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:33.562305   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:33.596587   72122 cri.go:89] found id: ""
	I0910 19:02:33.596615   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.596626   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:33.596634   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:33.596692   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:33.633307   72122 cri.go:89] found id: ""
	I0910 19:02:33.633330   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.633338   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:33.633343   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:33.633403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:33.667780   72122 cri.go:89] found id: ""
	I0910 19:02:33.667805   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.667815   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:33.667820   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:33.667878   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:33.702406   72122 cri.go:89] found id: ""
	I0910 19:02:33.702436   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.702447   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:33.702456   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:33.702524   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:33.744544   72122 cri.go:89] found id: ""
	I0910 19:02:33.744574   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.744581   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:33.744587   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:33.744661   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:33.782000   72122 cri.go:89] found id: ""
	I0910 19:02:33.782024   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.782032   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:33.782040   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:33.782053   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:33.858087   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:33.858115   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:33.858133   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:33.943238   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:33.943278   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:33.987776   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:33.987804   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:34.043197   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:34.043232   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:36.558122   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:36.571125   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:36.571195   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:36.606195   72122 cri.go:89] found id: ""
	I0910 19:02:36.606228   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.606239   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:36.606246   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:36.606304   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:36.640248   72122 cri.go:89] found id: ""
	I0910 19:02:36.640290   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.640302   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:36.640310   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:36.640360   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:36.676916   72122 cri.go:89] found id: ""
	I0910 19:02:36.676942   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.676952   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:36.676958   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:36.677013   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:36.713183   72122 cri.go:89] found id: ""
	I0910 19:02:36.713207   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.713218   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:36.713225   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:36.713283   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:36.750748   72122 cri.go:89] found id: ""
	I0910 19:02:36.750775   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.750782   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:36.750787   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:36.750847   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:36.782614   72122 cri.go:89] found id: ""
	I0910 19:02:36.782636   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.782644   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:36.782650   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:36.782709   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:36.822051   72122 cri.go:89] found id: ""
	I0910 19:02:36.822083   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.822094   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:36.822102   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:36.822161   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:36.856068   72122 cri.go:89] found id: ""
	I0910 19:02:36.856096   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.856106   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:36.856117   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:36.856132   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:36.909586   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:36.909625   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:36.931649   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:36.931676   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:37.040146   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:37.040175   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:37.040194   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:37.121902   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:37.121942   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:38.723762   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:40.723880   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:38.534001   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:41.035356   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:37.665212   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:39.668115   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:42.164118   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:39.665474   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:39.678573   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:39.678633   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:39.712755   72122 cri.go:89] found id: ""
	I0910 19:02:39.712783   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.712793   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:39.712800   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:39.712857   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:39.744709   72122 cri.go:89] found id: ""
	I0910 19:02:39.744738   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.744748   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:39.744756   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:39.744809   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:39.780161   72122 cri.go:89] found id: ""
	I0910 19:02:39.780189   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.780200   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:39.780207   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:39.780255   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:39.817665   72122 cri.go:89] found id: ""
	I0910 19:02:39.817695   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.817704   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:39.817709   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:39.817757   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:39.857255   72122 cri.go:89] found id: ""
	I0910 19:02:39.857291   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.857299   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:39.857306   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:39.857381   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:39.893514   72122 cri.go:89] found id: ""
	I0910 19:02:39.893540   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.893550   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:39.893558   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:39.893614   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:39.932720   72122 cri.go:89] found id: ""
	I0910 19:02:39.932753   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.932767   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:39.932775   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:39.932835   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:39.977063   72122 cri.go:89] found id: ""
	I0910 19:02:39.977121   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.977135   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:39.977146   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:39.977168   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:39.991414   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:39.991445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:40.066892   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:40.066910   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:40.066922   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:40.150648   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:40.150680   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:40.198519   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:40.198561   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:42.724332   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:45.223804   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:43.533841   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:45.534665   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:44.164851   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:46.165259   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:42.749906   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:42.769633   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:42.769703   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:42.812576   72122 cri.go:89] found id: ""
	I0910 19:02:42.812603   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.812613   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:42.812620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:42.812682   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:42.846233   72122 cri.go:89] found id: ""
	I0910 19:02:42.846257   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.846266   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:42.846271   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:42.846326   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:42.883564   72122 cri.go:89] found id: ""
	I0910 19:02:42.883593   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.883605   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:42.883612   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:42.883669   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:42.920774   72122 cri.go:89] found id: ""
	I0910 19:02:42.920801   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.920813   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:42.920820   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:42.920883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:42.953776   72122 cri.go:89] found id: ""
	I0910 19:02:42.953808   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.953820   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:42.953829   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:42.953887   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:42.989770   72122 cri.go:89] found id: ""
	I0910 19:02:42.989806   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.989820   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:42.989829   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:42.989893   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:43.022542   72122 cri.go:89] found id: ""
	I0910 19:02:43.022567   72122 logs.go:276] 0 containers: []
	W0910 19:02:43.022574   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:43.022580   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:43.022629   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:43.064308   72122 cri.go:89] found id: ""
	I0910 19:02:43.064329   72122 logs.go:276] 0 containers: []
	W0910 19:02:43.064337   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:43.064344   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:43.064356   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:43.120212   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:43.120243   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:43.134269   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:43.134296   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:43.218840   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:43.218865   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:43.218880   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:43.302560   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:43.302591   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:45.842788   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:45.857495   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:45.857557   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:45.892745   72122 cri.go:89] found id: ""
	I0910 19:02:45.892772   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.892782   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:45.892790   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:45.892888   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:45.928451   72122 cri.go:89] found id: ""
	I0910 19:02:45.928476   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.928486   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:45.928493   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:45.928551   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:45.962868   72122 cri.go:89] found id: ""
	I0910 19:02:45.962899   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.962910   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:45.962918   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:45.962979   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:45.996975   72122 cri.go:89] found id: ""
	I0910 19:02:45.997000   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.997009   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:45.997014   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:45.997065   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:46.032271   72122 cri.go:89] found id: ""
	I0910 19:02:46.032299   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.032309   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:46.032317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:46.032375   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:46.072629   72122 cri.go:89] found id: ""
	I0910 19:02:46.072654   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.072662   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:46.072667   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:46.072713   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:46.112196   72122 cri.go:89] found id: ""
	I0910 19:02:46.112220   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.112228   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:46.112233   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:46.112298   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:46.155700   72122 cri.go:89] found id: ""
	I0910 19:02:46.155732   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.155745   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:46.155759   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:46.155794   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:46.210596   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:46.210624   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:46.224951   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:46.224980   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:46.294571   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:46.294597   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:46.294613   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:46.382431   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:46.382495   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:47.224808   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:49.225392   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:51.227601   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:48.033643   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:50.535490   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:48.665543   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:50.666596   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
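The interleaved pod_ready lines come from the other test clusters (processes 71627, 71529 and 71183), which keep reading their metrics-server pod and finding its Ready condition False. A rough client-go sketch of that style of readiness poll follows; the kubeconfig path and the k8s-app=metrics-server label selector are assumptions for illustration only and are not taken from minikube's code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for the sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
		if err == nil {
			for _, p := range pods.Items {
				for _, cond := range p.Status.Conditions {
					if cond.Type != corev1.PodReady {
						continue
					}
					fmt.Printf("pod %q in \"kube-system\" namespace has status \"Ready\":%q\n",
						p.Name, cond.Status)
					if cond.Status == corev1.ConditionTrue {
						return // stop once the pod reports Ready
					}
				}
			}
		}
		time.Sleep(2 * time.Second) // roughly the cadence seen in the log above
	}
}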
	I0910 19:02:48.926582   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:48.941256   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:48.941338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:48.979810   72122 cri.go:89] found id: ""
	I0910 19:02:48.979842   72122 logs.go:276] 0 containers: []
	W0910 19:02:48.979849   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:48.979856   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:48.979917   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:49.015083   72122 cri.go:89] found id: ""
	I0910 19:02:49.015126   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.015136   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:49.015144   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:49.015205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:49.052417   72122 cri.go:89] found id: ""
	I0910 19:02:49.052445   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.052453   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:49.052459   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:49.052511   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:49.092485   72122 cri.go:89] found id: ""
	I0910 19:02:49.092523   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.092533   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:49.092538   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:49.092588   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:49.127850   72122 cri.go:89] found id: ""
	I0910 19:02:49.127882   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.127889   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:49.127897   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:49.127952   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:49.160693   72122 cri.go:89] found id: ""
	I0910 19:02:49.160724   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.160733   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:49.160740   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:49.160798   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:49.194713   72122 cri.go:89] found id: ""
	I0910 19:02:49.194737   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.194744   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:49.194750   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:49.194804   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:49.229260   72122 cri.go:89] found id: ""
	I0910 19:02:49.229283   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.229292   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:49.229303   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:49.229320   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:49.281963   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:49.281992   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:49.294789   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:49.294809   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:49.366126   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:49.366152   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:49.366172   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:49.451187   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:49.451225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:51.990361   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:52.003744   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:52.003807   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:52.036794   72122 cri.go:89] found id: ""
	I0910 19:02:52.036824   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.036834   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:52.036840   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:52.036896   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:52.074590   72122 cri.go:89] found id: ""
	I0910 19:02:52.074613   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.074620   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:52.074625   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:52.074675   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:52.119926   72122 cri.go:89] found id: ""
	I0910 19:02:52.119967   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.119981   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:52.119990   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:52.120075   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:52.157862   72122 cri.go:89] found id: ""
	I0910 19:02:52.157889   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.157900   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:52.157906   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:52.157968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:52.198645   72122 cri.go:89] found id: ""
	I0910 19:02:52.198675   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.198686   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:52.198693   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:52.198756   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:52.240091   72122 cri.go:89] found id: ""
	I0910 19:02:52.240113   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.240129   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:52.240139   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:52.240197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:52.275046   72122 cri.go:89] found id: ""
	I0910 19:02:52.275079   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.275090   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:52.275098   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:52.275179   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:52.311141   72122 cri.go:89] found id: ""
	I0910 19:02:52.311172   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.311184   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:52.311196   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:52.311211   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:52.400004   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:52.400039   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:52.449043   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:52.449090   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:53.724151   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:56.223353   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:53.033328   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:55.035259   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:53.164639   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:55.165714   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:52.502304   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:52.502336   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:52.518747   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:52.518772   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:52.593581   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:55.094092   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:55.108752   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:55.108830   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:55.143094   72122 cri.go:89] found id: ""
	I0910 19:02:55.143122   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.143133   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:55.143141   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:55.143198   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:55.184298   72122 cri.go:89] found id: ""
	I0910 19:02:55.184326   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.184334   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:55.184340   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:55.184397   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:55.216557   72122 cri.go:89] found id: ""
	I0910 19:02:55.216585   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.216596   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:55.216613   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:55.216676   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:55.251049   72122 cri.go:89] found id: ""
	I0910 19:02:55.251075   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.251083   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:55.251090   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:55.251152   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:55.282689   72122 cri.go:89] found id: ""
	I0910 19:02:55.282716   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.282724   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:55.282729   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:55.282800   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:55.316959   72122 cri.go:89] found id: ""
	I0910 19:02:55.316993   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.317004   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:55.317011   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:55.317085   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:55.353110   72122 cri.go:89] found id: ""
	I0910 19:02:55.353134   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.353143   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:55.353149   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:55.353205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:55.392391   72122 cri.go:89] found id: ""
	I0910 19:02:55.392422   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.392434   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:55.392446   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:55.392461   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:55.445431   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:55.445469   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:55.459348   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:55.459374   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:55.528934   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:55.528957   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:55.528973   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:55.610797   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:55.610833   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:58.223882   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:00.223951   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:57.533754   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:59.535018   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:01.535255   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:57.667276   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:00.164510   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:58.152775   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:58.166383   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:58.166440   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:58.203198   72122 cri.go:89] found id: ""
	I0910 19:02:58.203225   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.203233   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:58.203239   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:58.203284   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:58.240538   72122 cri.go:89] found id: ""
	I0910 19:02:58.240560   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.240567   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:58.240573   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:58.240633   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:58.274802   72122 cri.go:89] found id: ""
	I0910 19:02:58.274826   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.274833   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:58.274839   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:58.274886   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:58.311823   72122 cri.go:89] found id: ""
	I0910 19:02:58.311857   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.311868   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:58.311876   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:58.311933   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:58.347260   72122 cri.go:89] found id: ""
	I0910 19:02:58.347281   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.347288   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:58.347294   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:58.347338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:58.382621   72122 cri.go:89] found id: ""
	I0910 19:02:58.382645   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.382655   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:58.382662   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:58.382720   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:58.418572   72122 cri.go:89] found id: ""
	I0910 19:02:58.418597   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.418605   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:58.418611   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:58.418663   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:58.459955   72122 cri.go:89] found id: ""
	I0910 19:02:58.459987   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.459995   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:58.460003   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:58.460016   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:58.512831   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:58.512866   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:58.527036   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:58.527067   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:58.593329   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:58.593350   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:58.593366   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:58.671171   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:58.671201   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:01.211905   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:01.226567   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:01.226724   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:01.261860   72122 cri.go:89] found id: ""
	I0910 19:03:01.261885   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.261893   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:01.261898   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:01.261946   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:01.294754   72122 cri.go:89] found id: ""
	I0910 19:03:01.294774   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.294781   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:01.294786   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:01.294833   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:01.328378   72122 cri.go:89] found id: ""
	I0910 19:03:01.328403   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.328412   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:01.328417   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:01.328465   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:01.363344   72122 cri.go:89] found id: ""
	I0910 19:03:01.363370   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.363380   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:01.363388   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:01.363446   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:01.398539   72122 cri.go:89] found id: ""
	I0910 19:03:01.398576   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.398586   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:01.398593   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:01.398654   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:01.431367   72122 cri.go:89] found id: ""
	I0910 19:03:01.431390   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.431397   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:01.431403   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:01.431458   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:01.464562   72122 cri.go:89] found id: ""
	I0910 19:03:01.464589   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.464599   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:01.464606   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:01.464666   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:01.497493   72122 cri.go:89] found id: ""
	I0910 19:03:01.497520   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.497531   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:01.497540   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:01.497555   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:01.583083   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:01.583140   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:01.624887   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:01.624919   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:01.676124   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:01.676155   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:01.690861   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:01.690894   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:01.763695   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
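Every "describe nodes" attempt above fails the same way: kubectl cannot reach localhost:8443 because nothing is listening on the apiserver port, so the command exits with status 1 and only the "connection refused" stderr is captured. A tiny probe like the following (assumed to run on the affected node) reproduces that condition directly.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// With no apiserver listening, this dial fails with "connection refused",
	// which is exactly what kubectl reports in the log above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("port 8443 is open; kubectl should be able to connect")
}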
	I0910 19:03:02.724017   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.725049   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.033371   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:06.033600   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:02.666137   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.669740   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:07.164822   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.264867   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:04.279106   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:04.279176   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:04.315358   72122 cri.go:89] found id: ""
	I0910 19:03:04.315390   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.315398   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:04.315403   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:04.315457   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:04.359466   72122 cri.go:89] found id: ""
	I0910 19:03:04.359489   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.359496   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:04.359504   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:04.359563   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:04.399504   72122 cri.go:89] found id: ""
	I0910 19:03:04.399529   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.399538   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:04.399545   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:04.399604   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:04.438244   72122 cri.go:89] found id: ""
	I0910 19:03:04.438269   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.438277   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:04.438282   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:04.438340   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:04.475299   72122 cri.go:89] found id: ""
	I0910 19:03:04.475321   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.475329   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:04.475334   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:04.475386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:04.516500   72122 cri.go:89] found id: ""
	I0910 19:03:04.516520   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.516529   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:04.516534   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:04.516588   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:04.551191   72122 cri.go:89] found id: ""
	I0910 19:03:04.551214   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.551222   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:04.551228   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:04.551273   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:04.585646   72122 cri.go:89] found id: ""
	I0910 19:03:04.585667   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.585675   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:04.585684   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:04.585699   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:04.598832   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:04.598858   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:04.670117   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:04.670140   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:04.670156   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:04.746592   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:04.746626   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:04.784061   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:04.784088   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:07.337082   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:07.350696   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:07.350752   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:07.387344   72122 cri.go:89] found id: ""
	I0910 19:03:07.387373   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.387384   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:07.387391   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:07.387449   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:07.420468   72122 cri.go:89] found id: ""
	I0910 19:03:07.420490   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.420498   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:07.420503   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:07.420566   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:07.453746   72122 cri.go:89] found id: ""
	I0910 19:03:07.453773   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.453784   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:07.453791   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:07.453845   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:07.487359   72122 cri.go:89] found id: ""
	I0910 19:03:07.487388   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.487400   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:07.487407   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:07.487470   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:07.223432   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:09.723164   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:08.033767   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:10.035613   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:09.165972   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:11.663740   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:07.520803   72122 cri.go:89] found id: ""
	I0910 19:03:07.520827   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.520834   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:07.520839   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:07.520898   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:07.556908   72122 cri.go:89] found id: ""
	I0910 19:03:07.556934   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.556945   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:07.556953   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:07.557017   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:07.596072   72122 cri.go:89] found id: ""
	I0910 19:03:07.596093   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.596102   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:07.596107   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:07.596165   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:07.631591   72122 cri.go:89] found id: ""
	I0910 19:03:07.631620   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.631630   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:07.631639   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:07.631661   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:07.683892   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:07.683923   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:07.697619   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:07.697645   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:07.766370   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:07.766397   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:07.766413   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:07.854102   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:07.854140   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:10.400185   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:10.412771   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:10.412842   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:10.447710   72122 cri.go:89] found id: ""
	I0910 19:03:10.447739   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.447750   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:10.447757   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:10.447822   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:10.480865   72122 cri.go:89] found id: ""
	I0910 19:03:10.480892   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.480902   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:10.480909   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:10.480966   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:10.514893   72122 cri.go:89] found id: ""
	I0910 19:03:10.514919   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.514927   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:10.514933   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:10.514994   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:10.556332   72122 cri.go:89] found id: ""
	I0910 19:03:10.556374   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.556385   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:10.556392   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:10.556457   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:10.590529   72122 cri.go:89] found id: ""
	I0910 19:03:10.590562   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.590573   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:10.590581   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:10.590642   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:10.623697   72122 cri.go:89] found id: ""
	I0910 19:03:10.623724   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.623732   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:10.623737   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:10.623788   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:10.659236   72122 cri.go:89] found id: ""
	I0910 19:03:10.659259   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.659270   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:10.659277   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:10.659338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:10.693150   72122 cri.go:89] found id: ""
	I0910 19:03:10.693182   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.693192   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:10.693202   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:10.693217   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:10.744624   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:10.744663   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:10.758797   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:10.758822   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:10.853796   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:10.853815   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:10.853827   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:10.937972   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:10.938019   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:11.724808   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:14.224052   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:12.535134   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:15.033867   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:17.034507   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:13.667548   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:16.164483   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:13.481898   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:13.495440   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:13.495505   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:13.531423   72122 cri.go:89] found id: ""
	I0910 19:03:13.531452   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.531463   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:13.531470   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:13.531532   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:13.571584   72122 cri.go:89] found id: ""
	I0910 19:03:13.571607   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.571615   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:13.571620   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:13.571674   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:13.609670   72122 cri.go:89] found id: ""
	I0910 19:03:13.609695   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.609702   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:13.609707   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:13.609761   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:13.644726   72122 cri.go:89] found id: ""
	I0910 19:03:13.644755   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.644766   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:13.644773   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:13.644831   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:13.679692   72122 cri.go:89] found id: ""
	I0910 19:03:13.679722   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.679733   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:13.679741   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:13.679791   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:13.717148   72122 cri.go:89] found id: ""
	I0910 19:03:13.717177   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.717186   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:13.717192   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:13.717247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:13.755650   72122 cri.go:89] found id: ""
	I0910 19:03:13.755676   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.755688   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:13.755693   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:13.755740   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:13.788129   72122 cri.go:89] found id: ""
	I0910 19:03:13.788158   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.788169   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:13.788179   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:13.788194   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:13.865241   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:13.865277   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:13.909205   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:13.909233   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:13.963495   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:13.963523   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:13.977311   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:13.977337   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:14.047015   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:16.547505   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:16.568333   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:16.568412   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:16.610705   72122 cri.go:89] found id: ""
	I0910 19:03:16.610734   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.610744   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:16.610752   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:16.610808   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:16.647307   72122 cri.go:89] found id: ""
	I0910 19:03:16.647333   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.647340   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:16.647345   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:16.647409   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:16.684513   72122 cri.go:89] found id: ""
	I0910 19:03:16.684536   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.684544   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:16.684549   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:16.684602   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:16.718691   72122 cri.go:89] found id: ""
	I0910 19:03:16.718719   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.718729   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:16.718734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:16.718794   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:16.753250   72122 cri.go:89] found id: ""
	I0910 19:03:16.753279   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.753291   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:16.753298   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:16.753358   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:16.788953   72122 cri.go:89] found id: ""
	I0910 19:03:16.788984   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.789001   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:16.789009   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:16.789084   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:16.823715   72122 cri.go:89] found id: ""
	I0910 19:03:16.823746   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.823760   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:16.823767   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:16.823837   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:16.858734   72122 cri.go:89] found id: ""
	I0910 19:03:16.858758   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.858770   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:16.858780   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:16.858795   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:16.897983   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:16.898012   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:16.950981   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:16.951015   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:16.964809   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:16.964839   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:17.039142   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:17.039163   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:17.039177   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:16.724218   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:19.223909   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:19.533783   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:21.534203   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:18.164708   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:20.664302   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:19.619941   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:19.634432   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:19.634489   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:19.671220   72122 cri.go:89] found id: ""
	I0910 19:03:19.671246   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.671256   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:19.671264   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:19.671322   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:19.704251   72122 cri.go:89] found id: ""
	I0910 19:03:19.704278   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.704294   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:19.704301   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:19.704347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:19.745366   72122 cri.go:89] found id: ""
	I0910 19:03:19.745393   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.745403   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:19.745410   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:19.745466   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:19.781100   72122 cri.go:89] found id: ""
	I0910 19:03:19.781129   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.781136   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:19.781141   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:19.781195   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:19.817177   72122 cri.go:89] found id: ""
	I0910 19:03:19.817207   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.817219   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:19.817226   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:19.817292   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:19.852798   72122 cri.go:89] found id: ""
	I0910 19:03:19.852829   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.852837   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:19.852842   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:19.852889   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:19.887173   72122 cri.go:89] found id: ""
	I0910 19:03:19.887200   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.887210   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:19.887219   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:19.887409   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:19.922997   72122 cri.go:89] found id: ""
	I0910 19:03:19.923026   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.923038   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:19.923049   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:19.923063   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:19.975703   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:19.975736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:19.989834   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:19.989866   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:20.061312   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:20.061332   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:20.061344   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:20.143045   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:20.143080   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:21.723250   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:23.723771   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:25.724346   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:24.036790   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:26.533830   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:22.664756   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:25.164650   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:22.681900   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:22.694860   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:22.694923   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:22.738529   72122 cri.go:89] found id: ""
	I0910 19:03:22.738553   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.738563   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:22.738570   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:22.738640   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:22.778102   72122 cri.go:89] found id: ""
	I0910 19:03:22.778132   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.778143   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:22.778150   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:22.778207   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:22.813273   72122 cri.go:89] found id: ""
	I0910 19:03:22.813307   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.813320   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:22.813334   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:22.813397   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:22.849613   72122 cri.go:89] found id: ""
	I0910 19:03:22.849637   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.849646   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:22.849651   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:22.849701   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:22.883138   72122 cri.go:89] found id: ""
	I0910 19:03:22.883167   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.883178   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:22.883185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:22.883237   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:22.918521   72122 cri.go:89] found id: ""
	I0910 19:03:22.918550   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.918567   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:22.918574   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:22.918632   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:22.966657   72122 cri.go:89] found id: ""
	I0910 19:03:22.966684   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.966691   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:22.966701   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:22.966762   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:23.022254   72122 cri.go:89] found id: ""
	I0910 19:03:23.022282   72122 logs.go:276] 0 containers: []
	W0910 19:03:23.022290   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:23.022298   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:23.022309   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:23.082347   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:23.082386   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:23.096792   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:23.096814   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:23.172720   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:23.172740   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:23.172754   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:23.256155   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:23.256193   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:25.797211   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:25.810175   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:25.810234   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:25.844848   72122 cri.go:89] found id: ""
	I0910 19:03:25.844876   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.844886   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:25.844901   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:25.844968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:25.877705   72122 cri.go:89] found id: ""
	I0910 19:03:25.877736   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.877747   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:25.877755   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:25.877807   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:25.913210   72122 cri.go:89] found id: ""
	I0910 19:03:25.913238   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.913248   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:25.913256   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:25.913316   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:25.947949   72122 cri.go:89] found id: ""
	I0910 19:03:25.947974   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.947984   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:25.947991   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:25.948050   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:25.983487   72122 cri.go:89] found id: ""
	I0910 19:03:25.983511   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.983519   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:25.983524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:25.983573   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:26.018176   72122 cri.go:89] found id: ""
	I0910 19:03:26.018201   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.018209   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:26.018214   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:26.018271   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:26.052063   72122 cri.go:89] found id: ""
	I0910 19:03:26.052087   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.052097   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:26.052104   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:26.052165   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:26.091919   72122 cri.go:89] found id: ""
	I0910 19:03:26.091949   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.091958   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:26.091968   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:26.091983   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:26.146059   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:26.146094   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:26.160529   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:26.160562   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:26.230742   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:26.230764   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:26.230778   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:26.313191   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:26.313222   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:27.724922   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:30.223811   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:29.039957   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:31.533256   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:27.665626   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:29.666857   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:32.165153   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:28.858457   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:28.873725   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:28.873788   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:28.922685   72122 cri.go:89] found id: ""
	I0910 19:03:28.922717   72122 logs.go:276] 0 containers: []
	W0910 19:03:28.922729   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:28.922737   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:28.922795   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:28.973236   72122 cri.go:89] found id: ""
	I0910 19:03:28.973260   72122 logs.go:276] 0 containers: []
	W0910 19:03:28.973270   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:28.973277   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:28.973339   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:29.008999   72122 cri.go:89] found id: ""
	I0910 19:03:29.009049   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.009062   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:29.009081   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:29.009148   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:29.049009   72122 cri.go:89] found id: ""
	I0910 19:03:29.049037   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.049047   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:29.049056   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:29.049131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:29.089543   72122 cri.go:89] found id: ""
	I0910 19:03:29.089564   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.089573   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:29.089578   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:29.089648   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:29.126887   72122 cri.go:89] found id: ""
	I0910 19:03:29.126911   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.126918   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:29.126924   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:29.126969   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:29.161369   72122 cri.go:89] found id: ""
	I0910 19:03:29.161395   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.161405   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:29.161412   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:29.161474   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:29.199627   72122 cri.go:89] found id: ""
	I0910 19:03:29.199652   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.199661   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:29.199672   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:29.199691   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:29.268353   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:29.268386   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:29.268401   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:29.351470   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:29.351504   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:29.391768   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:29.391796   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:29.442705   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:29.442736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:31.957567   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:31.970218   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:31.970274   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:32.004870   72122 cri.go:89] found id: ""
	I0910 19:03:32.004898   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.004908   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:32.004915   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:32.004971   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:32.045291   72122 cri.go:89] found id: ""
	I0910 19:03:32.045322   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.045331   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:32.045337   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:32.045403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:32.085969   72122 cri.go:89] found id: ""
	I0910 19:03:32.085999   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.086007   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:32.086013   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:32.086067   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:32.120100   72122 cri.go:89] found id: ""
	I0910 19:03:32.120127   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.120135   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:32.120141   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:32.120187   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:32.153977   72122 cri.go:89] found id: ""
	I0910 19:03:32.154004   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.154011   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:32.154016   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:32.154065   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:32.195980   72122 cri.go:89] found id: ""
	I0910 19:03:32.196005   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.196013   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:32.196019   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:32.196068   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:32.233594   72122 cri.go:89] found id: ""
	I0910 19:03:32.233616   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.233623   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:32.233632   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:32.233677   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:32.268118   72122 cri.go:89] found id: ""
	I0910 19:03:32.268144   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.268152   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:32.268160   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:32.268171   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:32.281389   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:32.281416   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:32.359267   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:32.359289   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:32.359304   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:32.445096   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:32.445137   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:32.483288   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:32.483325   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:32.224155   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:34.724191   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:33.537955   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:36.033801   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:34.663475   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:36.665627   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:35.040393   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:35.053698   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:35.053750   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:35.087712   72122 cri.go:89] found id: ""
	I0910 19:03:35.087742   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.087751   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:35.087757   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:35.087802   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:35.125437   72122 cri.go:89] found id: ""
	I0910 19:03:35.125468   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.125482   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:35.125495   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:35.125562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:35.163885   72122 cri.go:89] found id: ""
	I0910 19:03:35.163914   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.163924   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:35.163931   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:35.163989   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:35.199426   72122 cri.go:89] found id: ""
	I0910 19:03:35.199459   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.199471   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:35.199479   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:35.199559   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:35.236388   72122 cri.go:89] found id: ""
	I0910 19:03:35.236408   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.236416   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:35.236421   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:35.236465   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:35.274797   72122 cri.go:89] found id: ""
	I0910 19:03:35.274817   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.274825   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:35.274830   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:35.274874   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:35.308127   72122 cri.go:89] found id: ""
	I0910 19:03:35.308155   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.308166   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:35.308173   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:35.308230   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:35.340675   72122 cri.go:89] found id: ""
	I0910 19:03:35.340697   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.340704   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:35.340712   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:35.340727   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:35.390806   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:35.390842   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:35.404427   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:35.404458   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:35.471526   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:35.471560   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:35.471575   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:35.547469   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:35.547497   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:37.223464   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:39.224137   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:41.224189   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:38.534280   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:40.534728   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:38.666077   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:41.165483   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:38.087127   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:38.100195   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:38.100251   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:38.135386   72122 cri.go:89] found id: ""
	I0910 19:03:38.135408   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.135416   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:38.135422   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:38.135480   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:38.168531   72122 cri.go:89] found id: ""
	I0910 19:03:38.168558   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.168568   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:38.168577   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:38.168639   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:38.202931   72122 cri.go:89] found id: ""
	I0910 19:03:38.202958   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.202968   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:38.202974   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:38.203030   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:38.239185   72122 cri.go:89] found id: ""
	I0910 19:03:38.239209   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.239219   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:38.239226   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:38.239279   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:38.276927   72122 cri.go:89] found id: ""
	I0910 19:03:38.276952   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.276961   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:38.276967   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:38.277035   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:38.311923   72122 cri.go:89] found id: ""
	I0910 19:03:38.311951   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.311962   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:38.311971   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:38.312034   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:38.344981   72122 cri.go:89] found id: ""
	I0910 19:03:38.345012   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.345023   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:38.345030   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:38.345099   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:38.378012   72122 cri.go:89] found id: ""
	I0910 19:03:38.378037   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.378048   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:38.378058   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:38.378076   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:38.449361   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:38.449384   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:38.449396   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:38.530683   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:38.530713   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:38.570047   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:38.570073   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:38.620143   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:38.620176   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:41.134152   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:41.148416   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:41.148509   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:41.186681   72122 cri.go:89] found id: ""
	I0910 19:03:41.186706   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.186713   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:41.186719   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:41.186767   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:41.221733   72122 cri.go:89] found id: ""
	I0910 19:03:41.221758   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.221769   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:41.221776   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:41.221834   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:41.256099   72122 cri.go:89] found id: ""
	I0910 19:03:41.256125   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.256136   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:41.256143   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:41.256194   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:41.289825   72122 cri.go:89] found id: ""
	I0910 19:03:41.289850   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.289860   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:41.289867   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:41.289926   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:41.323551   72122 cri.go:89] found id: ""
	I0910 19:03:41.323581   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.323594   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:41.323601   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:41.323659   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:41.356508   72122 cri.go:89] found id: ""
	I0910 19:03:41.356535   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.356546   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:41.356553   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:41.356608   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:41.391556   72122 cri.go:89] found id: ""
	I0910 19:03:41.391579   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.391586   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:41.391592   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:41.391651   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:41.427685   72122 cri.go:89] found id: ""
	I0910 19:03:41.427711   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.427726   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:41.427743   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:41.427758   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:41.481970   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:41.482001   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:41.495266   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:41.495290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:41.568334   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:41.568357   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:41.568370   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:41.650178   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:41.650211   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:43.724494   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:46.223803   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:43.034100   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:45.035091   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:43.167877   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:45.664633   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:44.193665   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:44.209118   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:44.209197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:44.245792   72122 cri.go:89] found id: ""
	I0910 19:03:44.245819   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.245829   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:44.245834   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:44.245900   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:44.285673   72122 cri.go:89] found id: ""
	I0910 19:03:44.285699   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.285711   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:44.285719   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:44.285787   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:44.326471   72122 cri.go:89] found id: ""
	I0910 19:03:44.326495   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.326505   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:44.326520   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:44.326589   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:44.367864   72122 cri.go:89] found id: ""
	I0910 19:03:44.367890   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.367898   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:44.367907   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:44.367954   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:44.407161   72122 cri.go:89] found id: ""
	I0910 19:03:44.407185   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.407193   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:44.407198   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:44.407256   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:44.446603   72122 cri.go:89] found id: ""
	I0910 19:03:44.446628   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.446638   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:44.446645   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:44.446705   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:44.486502   72122 cri.go:89] found id: ""
	I0910 19:03:44.486526   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.486536   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:44.486543   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:44.486605   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:44.524992   72122 cri.go:89] found id: ""
	I0910 19:03:44.525017   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.525025   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:44.525033   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:44.525044   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:44.579387   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:44.579418   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:44.594045   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:44.594070   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:44.678857   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:44.678883   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:44.678897   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:44.763799   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:44.763830   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:47.305631   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:47.319275   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:47.319347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:47.359199   72122 cri.go:89] found id: ""
	I0910 19:03:47.359222   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.359233   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:47.359240   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:47.359300   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:47.397579   72122 cri.go:89] found id: ""
	I0910 19:03:47.397602   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.397610   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:47.397616   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:47.397674   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:47.431114   72122 cri.go:89] found id: ""
	I0910 19:03:47.431138   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.431146   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:47.431151   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:47.431205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:47.470475   72122 cri.go:89] found id: ""
	I0910 19:03:47.470499   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.470509   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:47.470515   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:47.470573   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:48.227625   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:50.725421   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:47.534967   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:49.027864   71529 pod_ready.go:82] duration metric: took 4m0.000448579s for pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace to be "Ready" ...
	E0910 19:03:49.027890   71529 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0910 19:03:49.027905   71529 pod_ready.go:39] duration metric: took 4m14.536052937s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:03:49.027929   71529 kubeadm.go:597] duration metric: took 4m22.283340761s to restartPrimaryControlPlane
	W0910 19:03:49.027982   71529 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0910 19:03:49.028009   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:03:47.668029   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:50.164077   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:47.504484   72122 cri.go:89] found id: ""
	I0910 19:03:47.504509   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.504518   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:47.504524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:47.504577   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:47.541633   72122 cri.go:89] found id: ""
	I0910 19:03:47.541651   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.541658   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:47.541663   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:47.541706   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:47.579025   72122 cri.go:89] found id: ""
	I0910 19:03:47.579051   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.579060   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:47.579068   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:47.579123   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:47.612333   72122 cri.go:89] found id: ""
	I0910 19:03:47.612359   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.612370   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:47.612380   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:47.612395   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:47.667214   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:47.667242   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:47.683425   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:47.683466   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:47.749510   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:47.749531   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:47.749543   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:47.830454   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:47.830487   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:50.373207   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:50.387191   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:50.387247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:50.422445   72122 cri.go:89] found id: ""
	I0910 19:03:50.422476   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.422488   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:50.422495   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:50.422562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:50.456123   72122 cri.go:89] found id: ""
	I0910 19:03:50.456145   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.456153   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:50.456157   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:50.456211   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:50.488632   72122 cri.go:89] found id: ""
	I0910 19:03:50.488661   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.488672   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:50.488680   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:50.488736   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:50.523603   72122 cri.go:89] found id: ""
	I0910 19:03:50.523628   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.523636   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:50.523641   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:50.523699   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:50.559741   72122 cri.go:89] found id: ""
	I0910 19:03:50.559765   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.559773   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:50.559778   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:50.559842   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:50.595387   72122 cri.go:89] found id: ""
	I0910 19:03:50.595406   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.595414   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:50.595420   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:50.595472   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:50.628720   72122 cri.go:89] found id: ""
	I0910 19:03:50.628747   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.628767   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:50.628774   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:50.628833   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:50.660635   72122 cri.go:89] found id: ""
	I0910 19:03:50.660655   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.660663   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:50.660671   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:50.660683   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:50.716517   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:50.716544   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:50.731411   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:50.731443   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:50.799252   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:50.799275   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:50.799290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:50.874490   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:50.874524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:53.222989   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:55.225335   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:55.225365   71627 pod_ready.go:82] duration metric: took 4m0.007907353s for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	E0910 19:03:55.225523   71627 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0910 19:03:55.225534   71627 pod_ready.go:39] duration metric: took 4m2.40870138s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:03:55.225551   71627 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:03:55.225579   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:55.225629   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:55.270742   71627 cri.go:89] found id: "1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:55.270761   71627 cri.go:89] found id: ""
	I0910 19:03:55.270768   71627 logs.go:276] 1 containers: [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a]
	I0910 19:03:55.270811   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.276233   71627 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:55.276283   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:55.316033   71627 cri.go:89] found id: "f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:55.316051   71627 cri.go:89] found id: ""
	I0910 19:03:55.316058   71627 logs.go:276] 1 containers: [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade]
	I0910 19:03:55.316103   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.320441   71627 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:55.320494   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:55.354406   71627 cri.go:89] found id: "24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:55.354428   71627 cri.go:89] found id: ""
	I0910 19:03:55.354435   71627 logs.go:276] 1 containers: [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100]
	I0910 19:03:55.354482   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.358553   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:55.358621   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:55.393871   71627 cri.go:89] found id: "1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:55.393896   71627 cri.go:89] found id: ""
	I0910 19:03:55.393904   71627 logs.go:276] 1 containers: [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68]
	I0910 19:03:55.393959   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.398102   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:55.398154   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:55.432605   71627 cri.go:89] found id: "48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:55.432625   71627 cri.go:89] found id: ""
	I0910 19:03:55.432632   71627 logs.go:276] 1 containers: [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27]
	I0910 19:03:55.432686   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.437631   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:55.437689   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:55.474250   71627 cri.go:89] found id: "55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:55.474277   71627 cri.go:89] found id: ""
	I0910 19:03:55.474287   71627 logs.go:276] 1 containers: [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20]
	I0910 19:03:55.474352   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.479177   71627 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:55.479235   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:55.514918   71627 cri.go:89] found id: ""
	I0910 19:03:55.514942   71627 logs.go:276] 0 containers: []
	W0910 19:03:55.514951   71627 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:55.514956   71627 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:03:55.515014   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:03:55.549310   71627 cri.go:89] found id: "b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:55.549330   71627 cri.go:89] found id: "173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:55.549335   71627 cri.go:89] found id: ""
	I0910 19:03:55.549347   71627 logs.go:276] 2 containers: [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9]
	I0910 19:03:55.549404   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.553420   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.557502   71627 logs.go:123] Gathering logs for kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] ...
	I0910 19:03:55.557531   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:55.592661   71627 logs.go:123] Gathering logs for kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] ...
	I0910 19:03:55.592685   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:55.629876   71627 logs.go:123] Gathering logs for storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] ...
	I0910 19:03:55.629908   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:55.668935   71627 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:55.668963   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:55.685881   71627 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:55.685906   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:03:55.815552   71627 logs.go:123] Gathering logs for coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] ...
	I0910 19:03:55.815578   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:55.854615   71627 logs.go:123] Gathering logs for kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] ...
	I0910 19:03:55.854640   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:55.906027   71627 logs.go:123] Gathering logs for storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] ...
	I0910 19:03:55.906069   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:55.943771   71627 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:55.943808   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:52.666368   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:55.165213   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:53.417835   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:53.430627   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:53.430694   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:53.469953   72122 cri.go:89] found id: ""
	I0910 19:03:53.469981   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.469992   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:53.469999   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:53.470060   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:53.503712   72122 cri.go:89] found id: ""
	I0910 19:03:53.503739   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.503750   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:53.503757   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:53.503814   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:53.539875   72122 cri.go:89] found id: ""
	I0910 19:03:53.539895   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.539902   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:53.539907   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:53.539952   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:53.575040   72122 cri.go:89] found id: ""
	I0910 19:03:53.575067   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.575078   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:53.575085   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:53.575159   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:53.611171   72122 cri.go:89] found id: ""
	I0910 19:03:53.611193   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.611201   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:53.611206   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:53.611253   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:53.644467   72122 cri.go:89] found id: ""
	I0910 19:03:53.644494   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.644505   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:53.644513   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:53.644575   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:53.680886   72122 cri.go:89] found id: ""
	I0910 19:03:53.680913   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.680924   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:53.680931   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:53.680989   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:53.716834   72122 cri.go:89] found id: ""
	I0910 19:03:53.716863   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.716875   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:53.716885   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:53.716900   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:53.755544   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:53.755568   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:53.807382   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:53.807411   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:53.820289   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:53.820311   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:53.891500   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:53.891524   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:53.891540   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:56.472368   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:56.491939   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:56.492020   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:56.535575   72122 cri.go:89] found id: ""
	I0910 19:03:56.535603   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.535614   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:56.535620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:56.535672   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:56.570366   72122 cri.go:89] found id: ""
	I0910 19:03:56.570390   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.570398   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:56.570403   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:56.570452   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:56.609486   72122 cri.go:89] found id: ""
	I0910 19:03:56.609524   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.609535   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:56.609542   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:56.609608   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:56.650268   72122 cri.go:89] found id: ""
	I0910 19:03:56.650295   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.650305   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:56.650312   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:56.650371   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:56.689113   72122 cri.go:89] found id: ""
	I0910 19:03:56.689139   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.689146   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:56.689154   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:56.689214   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:56.721546   72122 cri.go:89] found id: ""
	I0910 19:03:56.721568   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.721576   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:56.721582   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:56.721639   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:56.753149   72122 cri.go:89] found id: ""
	I0910 19:03:56.753171   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.753179   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:56.753185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:56.753233   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:56.786624   72122 cri.go:89] found id: ""
	I0910 19:03:56.786648   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.786658   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:56.786669   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:56.786683   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:56.840243   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:56.840276   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:56.854453   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:56.854475   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:56.928814   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:56.928835   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:56.928849   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:57.012360   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:57.012403   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:56.443638   71627 logs.go:123] Gathering logs for container status ...
	I0910 19:03:56.443684   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:56.498856   71627 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:56.498897   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:56.573520   71627 logs.go:123] Gathering logs for kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] ...
	I0910 19:03:56.573548   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:56.621270   71627 logs.go:123] Gathering logs for etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] ...
	I0910 19:03:56.621301   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:59.173747   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:59.190441   71627 api_server.go:72] duration metric: took 4m14.110101643s to wait for apiserver process to appear ...
	I0910 19:03:59.190463   71627 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:03:59.190495   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:59.190539   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:59.224716   71627 cri.go:89] found id: "1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:59.224744   71627 cri.go:89] found id: ""
	I0910 19:03:59.224753   71627 logs.go:276] 1 containers: [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a]
	I0910 19:03:59.224811   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.229345   71627 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:59.229412   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:59.263589   71627 cri.go:89] found id: "f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:59.263622   71627 cri.go:89] found id: ""
	I0910 19:03:59.263630   71627 logs.go:276] 1 containers: [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade]
	I0910 19:03:59.263686   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.269664   71627 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:59.269728   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:59.312201   71627 cri.go:89] found id: "24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:59.312224   71627 cri.go:89] found id: ""
	I0910 19:03:59.312233   71627 logs.go:276] 1 containers: [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100]
	I0910 19:03:59.312288   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.317991   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:59.318067   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:59.360625   71627 cri.go:89] found id: "1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:59.360650   71627 cri.go:89] found id: ""
	I0910 19:03:59.360657   71627 logs.go:276] 1 containers: [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68]
	I0910 19:03:59.360707   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.364948   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:59.365010   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:59.404075   71627 cri.go:89] found id: "48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:59.404096   71627 cri.go:89] found id: ""
	I0910 19:03:59.404103   71627 logs.go:276] 1 containers: [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27]
	I0910 19:03:59.404149   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.408098   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:59.408141   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:59.443767   71627 cri.go:89] found id: "55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:59.443792   71627 cri.go:89] found id: ""
	I0910 19:03:59.443802   71627 logs.go:276] 1 containers: [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20]
	I0910 19:03:59.443858   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.448348   71627 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:59.448397   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:59.485373   71627 cri.go:89] found id: ""
	I0910 19:03:59.485401   71627 logs.go:276] 0 containers: []
	W0910 19:03:59.485409   71627 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:59.485414   71627 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:03:59.485470   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:03:59.522641   71627 cri.go:89] found id: "b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:59.522660   71627 cri.go:89] found id: "173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:59.522664   71627 cri.go:89] found id: ""
	I0910 19:03:59.522671   71627 logs.go:276] 2 containers: [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9]
	I0910 19:03:59.522726   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.527283   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.531256   71627 logs.go:123] Gathering logs for etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] ...
	I0910 19:03:59.531275   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:59.576358   71627 logs.go:123] Gathering logs for coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] ...
	I0910 19:03:59.576382   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:59.625938   71627 logs.go:123] Gathering logs for kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] ...
	I0910 19:03:59.625974   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:59.664362   71627 logs.go:123] Gathering logs for kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] ...
	I0910 19:03:59.664386   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:59.718655   71627 logs.go:123] Gathering logs for storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] ...
	I0910 19:03:59.718686   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:59.763954   71627 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:59.763984   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:59.785217   71627 logs.go:123] Gathering logs for kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] ...
	I0910 19:03:59.785248   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:59.836560   71627 logs.go:123] Gathering logs for kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] ...
	I0910 19:03:59.836604   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:59.878973   71627 logs.go:123] Gathering logs for storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] ...
	I0910 19:03:59.879001   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:59.929851   71627 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:59.929878   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:00.400346   71627 logs.go:123] Gathering logs for container status ...
	I0910 19:04:00.400384   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:00.442281   71627 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:00.442307   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:00.510448   71627 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:00.510480   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:03:57.665980   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:59.666054   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:01.668052   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:59.558561   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:59.572993   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:59.573094   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:59.618957   72122 cri.go:89] found id: ""
	I0910 19:03:59.618988   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.618999   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:59.619008   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:59.619072   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:59.662544   72122 cri.go:89] found id: ""
	I0910 19:03:59.662643   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.662661   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:59.662673   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:59.662750   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:59.704323   72122 cri.go:89] found id: ""
	I0910 19:03:59.704349   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.704360   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:59.704367   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:59.704426   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:59.738275   72122 cri.go:89] found id: ""
	I0910 19:03:59.738301   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.738311   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:59.738317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:59.738367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:59.778887   72122 cri.go:89] found id: ""
	I0910 19:03:59.778922   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.778934   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:59.778944   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:59.779010   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:59.814953   72122 cri.go:89] found id: ""
	I0910 19:03:59.814985   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.814995   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:59.815003   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:59.815064   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:59.850016   72122 cri.go:89] found id: ""
	I0910 19:03:59.850048   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.850061   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:59.850069   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:59.850131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:59.887546   72122 cri.go:89] found id: ""
	I0910 19:03:59.887589   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.887600   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:59.887613   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:59.887632   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:59.938761   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:59.938784   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:59.954572   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:59.954603   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:04:00.029593   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:04:00.029622   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:00.029638   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:00.121427   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:04:00.121462   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:02.660924   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:02.674661   72122 kubeadm.go:597] duration metric: took 4m3.166175956s to restartPrimaryControlPlane
	W0910 19:04:02.674744   72122 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0910 19:04:02.674769   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:04:03.133507   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:03.150426   72122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:04:03.161678   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:04:03.173362   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:04:03.173389   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 19:04:03.173436   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:04:03.183872   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:04:03.183934   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:04:03.193891   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:04:03.203385   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:04:03.203450   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:04:03.216255   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:04:03.227938   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:04:03.228001   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:04:03.240799   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:04:03.252871   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:04:03.252922   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:04:03.263682   72122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:04:03.337478   72122 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0910 19:04:03.337564   72122 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:04:03.506276   72122 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:04:03.506454   72122 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:04:03.506587   72122 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 19:04:03.697062   72122 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:04:03.698908   72122 out.go:235]   - Generating certificates and keys ...
	I0910 19:04:03.699004   72122 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:04:03.699083   72122 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:04:03.699184   72122 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:04:03.699270   72122 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:04:03.699361   72122 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:04:03.699517   72122 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:04:03.700040   72122 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:04:03.700773   72122 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:04:03.701529   72122 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:04:03.702334   72122 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:04:03.702627   72122 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:04:03.702715   72122 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:04:03.929760   72122 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:04:03.992724   72122 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:04:04.087552   72122 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:04:04.226550   72122 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:04:04.244695   72122 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:04:04.246125   72122 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:04:04.246187   72122 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:04:04.396099   72122 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:04:03.107779   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 19:04:03.112394   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I0910 19:04:03.113347   71627 api_server.go:141] control plane version: v1.31.0
	I0910 19:04:03.113367   71627 api_server.go:131] duration metric: took 3.922898577s to wait for apiserver health ...
	I0910 19:04:03.113375   71627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:04:03.113399   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:03.113443   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:03.153182   71627 cri.go:89] found id: "1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:04:03.153204   71627 cri.go:89] found id: ""
	I0910 19:04:03.153213   71627 logs.go:276] 1 containers: [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a]
	I0910 19:04:03.153263   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.157842   71627 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:03.157906   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:03.199572   71627 cri.go:89] found id: "f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:04:03.199594   71627 cri.go:89] found id: ""
	I0910 19:04:03.199604   71627 logs.go:276] 1 containers: [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade]
	I0910 19:04:03.199658   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.204332   71627 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:03.204409   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:03.252660   71627 cri.go:89] found id: "24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:04:03.252686   71627 cri.go:89] found id: ""
	I0910 19:04:03.252696   71627 logs.go:276] 1 containers: [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100]
	I0910 19:04:03.252751   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.257850   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:03.257915   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:03.300208   71627 cri.go:89] found id: "1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:04:03.300226   71627 cri.go:89] found id: ""
	I0910 19:04:03.300235   71627 logs.go:276] 1 containers: [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68]
	I0910 19:04:03.300294   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.304875   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:03.304953   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:03.346705   71627 cri.go:89] found id: "48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:04:03.346734   71627 cri.go:89] found id: ""
	I0910 19:04:03.346744   71627 logs.go:276] 1 containers: [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27]
	I0910 19:04:03.346807   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.351246   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:03.351314   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:03.391218   71627 cri.go:89] found id: "55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:04:03.391240   71627 cri.go:89] found id: ""
	I0910 19:04:03.391247   71627 logs.go:276] 1 containers: [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20]
	I0910 19:04:03.391290   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.396156   71627 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:03.396264   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:03.437436   71627 cri.go:89] found id: ""
	I0910 19:04:03.437464   71627 logs.go:276] 0 containers: []
	W0910 19:04:03.437473   71627 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:03.437479   71627 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:03.437551   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:03.476396   71627 cri.go:89] found id: "b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:04:03.476417   71627 cri.go:89] found id: "173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:04:03.476420   71627 cri.go:89] found id: ""
	I0910 19:04:03.476427   71627 logs.go:276] 2 containers: [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9]
	I0910 19:04:03.476481   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.480969   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.485821   71627 logs.go:123] Gathering logs for container status ...
	I0910 19:04:03.485843   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:03.537042   71627 logs.go:123] Gathering logs for kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] ...
	I0910 19:04:03.537079   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:04:03.599059   71627 logs.go:123] Gathering logs for coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] ...
	I0910 19:04:03.599102   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:04:03.637541   71627 logs.go:123] Gathering logs for kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] ...
	I0910 19:04:03.637576   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:04:03.682203   71627 logs.go:123] Gathering logs for kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] ...
	I0910 19:04:03.682234   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:04:03.734965   71627 logs.go:123] Gathering logs for storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] ...
	I0910 19:04:03.734992   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:04:03.769711   71627 logs.go:123] Gathering logs for storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] ...
	I0910 19:04:03.769738   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:04:03.805970   71627 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:03.805999   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:04.165756   71627 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:04.165796   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:04.254572   71627 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:04.254609   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:04.272637   71627 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:04.272686   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:04.421716   71627 logs.go:123] Gathering logs for etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] ...
	I0910 19:04:04.421756   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:04:04.476657   71627 logs.go:123] Gathering logs for kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] ...
	I0910 19:04:04.476701   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
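The block above is minikube's log collector shelling out to crictl: for each control-plane component it lists matching container IDs, then tails the last 400 lines of each container's logs. A minimal standalone sketch of the same pattern, assuming crictl is on PATH and sudo is available (this is not minikube's own cri.go/logs.go code, just the equivalent commands wrapped in Go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors `sudo crictl ps -a --quiet --name=<name>`: it returns
// the IDs of all containers (running or exited) whose name matches.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors `sudo crictl logs --tail 400 <id>`.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, name := range components {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Printf("listing %s failed: %v\n", name, err)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
		}
	}
}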
	I0910 19:04:07.038592   71627 system_pods.go:59] 8 kube-system pods found
	I0910 19:04:07.038618   71627 system_pods.go:61] "coredns-6f6b679f8f-nq9fl" [87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f] Running
	I0910 19:04:07.038624   71627 system_pods.go:61] "etcd-default-k8s-diff-port-557504" [484ce7e2-e5b6-4b96-928e-c4519e072425] Running
	I0910 19:04:07.038628   71627 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-557504" [55a71e48-2cca-484c-8186-176494f8158c] Running
	I0910 19:04:07.038632   71627 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-557504" [98e7866a-c4a1-4b90-a758-506502405feb] Running
	I0910 19:04:07.038636   71627 system_pods.go:61] "kube-proxy-4t8r9" [aca739fc-0169-433b-85f1-17bf3ab538cb] Running
	I0910 19:04:07.038639   71627 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-557504" [eaa24622-2f9c-47bd-b002-173d5fb06126] Running
	I0910 19:04:07.038644   71627 system_pods.go:61] "metrics-server-6867b74b74-4sfwg" [6b5d0161-6a62-4752-b714-ada6b3772956] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:07.038651   71627 system_pods.go:61] "storage-provisioner" [7536d42b-90f4-44de-a7ba-652f8e535304] Running
	I0910 19:04:07.038658   71627 system_pods.go:74] duration metric: took 3.925277367s to wait for pod list to return data ...
	I0910 19:04:07.038667   71627 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:04:07.040831   71627 default_sa.go:45] found service account: "default"
	I0910 19:04:07.040854   71627 default_sa.go:55] duration metric: took 2.180848ms for default service account to be created ...
	I0910 19:04:07.040864   71627 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:04:07.045130   71627 system_pods.go:86] 8 kube-system pods found
	I0910 19:04:07.045151   71627 system_pods.go:89] "coredns-6f6b679f8f-nq9fl" [87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f] Running
	I0910 19:04:07.045157   71627 system_pods.go:89] "etcd-default-k8s-diff-port-557504" [484ce7e2-e5b6-4b96-928e-c4519e072425] Running
	I0910 19:04:07.045162   71627 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-557504" [55a71e48-2cca-484c-8186-176494f8158c] Running
	I0910 19:04:07.045167   71627 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-557504" [98e7866a-c4a1-4b90-a758-506502405feb] Running
	I0910 19:04:07.045171   71627 system_pods.go:89] "kube-proxy-4t8r9" [aca739fc-0169-433b-85f1-17bf3ab538cb] Running
	I0910 19:04:07.045175   71627 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-557504" [eaa24622-2f9c-47bd-b002-173d5fb06126] Running
	I0910 19:04:07.045180   71627 system_pods.go:89] "metrics-server-6867b74b74-4sfwg" [6b5d0161-6a62-4752-b714-ada6b3772956] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:07.045184   71627 system_pods.go:89] "storage-provisioner" [7536d42b-90f4-44de-a7ba-652f8e535304] Running
	I0910 19:04:07.045191   71627 system_pods.go:126] duration metric: took 4.321406ms to wait for k8s-apps to be running ...
	I0910 19:04:07.045200   71627 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:04:07.045242   71627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:07.061292   71627 system_svc.go:56] duration metric: took 16.084643ms WaitForService to wait for kubelet
	I0910 19:04:07.061318   71627 kubeadm.go:582] duration metric: took 4m21.980981405s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:04:07.061342   71627 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:04:07.064260   71627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:04:07.064277   71627 node_conditions.go:123] node cpu capacity is 2
	I0910 19:04:07.064288   71627 node_conditions.go:105] duration metric: took 2.940712ms to run NodePressure ...
	I0910 19:04:07.064298   71627 start.go:241] waiting for startup goroutines ...
	I0910 19:04:07.064308   71627 start.go:246] waiting for cluster config update ...
	I0910 19:04:07.064318   71627 start.go:255] writing updated cluster config ...
	I0910 19:04:07.064627   71627 ssh_runner.go:195] Run: rm -f paused
	I0910 19:04:07.109814   71627 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:04:07.111804   71627 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-557504" cluster and "default" namespace by default
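The system_pods and default_sa checks that precede the "Done!" line are minikube asking the API server whether every kube-system pod is at least scheduled/Running and whether the "default" service account exists. A rough equivalent with client-go; the kubeconfig path and client-go version are assumptions, not taken from the log:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same idea as the system_pods.go lines: list kube-system pods and flag any
	// that are not Running (above, metrics-server is the one stuck in Pending).
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
		}
	}

	// Same idea as the default_sa.go lines: confirm the default service account exists.
	if _, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{}); err != nil {
		fmt.Println("default service account not found yet:", err)
	}
}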
	I0910 19:04:04.165083   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:06.663618   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:04.397627   72122 out.go:235]   - Booting up control plane ...
	I0910 19:04:04.397763   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:04:04.405199   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:04:04.407281   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:04:04.408182   72122 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:04:04.411438   72122 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 19:04:08.667046   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:11.164622   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:15.461731   71529 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.433698154s)
	I0910 19:04:15.461801   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:15.483515   71529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:04:15.497133   71529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:04:15.513903   71529 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:04:15.513924   71529 kubeadm.go:157] found existing configuration files:
	
	I0910 19:04:15.513972   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:04:15.524468   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:04:15.524529   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:04:15.534726   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:04:15.544892   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:04:15.544944   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:04:15.554663   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:04:15.564884   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:04:15.564978   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:04:15.574280   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:04:15.583882   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:04:15.583932   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
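The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed so that the following kubeadm init can regenerate it (here every file is missing, so each grep exits with status 2 and the rm -f is a no-op). A local sketch of that check, assuming direct file access rather than the ssh_runner the test uses:

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// cleanStaleConfig keeps path only if it exists and already references the
// expected control-plane endpoint, mirroring the grep / rm -f pair above.
func cleanStaleConfig(path string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // up to date, keep it
	}
	return os.RemoveAll(path) // missing or stale: remove (no-op if absent)
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanStaleConfig(f); err != nil {
			fmt.Println("cleanup failed:", err)
		}
	}
}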
	I0910 19:04:15.593971   71529 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:04:15.639220   71529 kubeadm.go:310] W0910 19:04:15.612221    3037 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:04:15.641412   71529 kubeadm.go:310] W0910 19:04:15.614470    3037 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:04:15.749471   71529 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:04:13.164865   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:15.165232   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:17.664384   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:19.664943   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:22.166309   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:24.300945   71529 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 19:04:24.301016   71529 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:04:24.301143   71529 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:04:24.301274   71529 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:04:24.301408   71529 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 19:04:24.301517   71529 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:04:24.302988   71529 out.go:235]   - Generating certificates and keys ...
	I0910 19:04:24.303079   71529 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:04:24.303132   71529 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:04:24.303197   71529 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:04:24.303252   71529 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:04:24.303315   71529 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:04:24.303367   71529 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:04:24.303443   71529 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:04:24.303517   71529 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:04:24.303631   71529 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:04:24.303737   71529 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:04:24.303792   71529 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:04:24.303873   71529 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:04:24.303954   71529 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:04:24.304037   71529 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 19:04:24.304120   71529 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:04:24.304217   71529 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:04:24.304299   71529 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:04:24.304423   71529 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:04:24.304523   71529 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:04:24.305839   71529 out.go:235]   - Booting up control plane ...
	I0910 19:04:24.305946   71529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:04:24.306046   71529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:04:24.306123   71529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:04:24.306254   71529 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:04:24.306338   71529 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:04:24.306387   71529 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:04:24.306507   71529 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 19:04:24.306608   71529 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 19:04:24.306679   71529 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.526264ms
	I0910 19:04:24.306748   71529 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 19:04:24.306801   71529 kubeadm.go:310] [api-check] The API server is healthy after 5.501960865s
	I0910 19:04:24.306887   71529 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 19:04:24.306997   71529 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 19:04:24.307045   71529 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 19:04:24.307202   71529 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-347802 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 19:04:24.307250   71529 kubeadm.go:310] [bootstrap-token] Using token: 3uw8fx.h3bliquui6tuj5mh
	I0910 19:04:24.308589   71529 out.go:235]   - Configuring RBAC rules ...
	I0910 19:04:24.308728   71529 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 19:04:24.308847   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 19:04:24.309021   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 19:04:24.309197   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 19:04:24.309330   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 19:04:24.309437   71529 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 19:04:24.309612   71529 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 19:04:24.309681   71529 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 19:04:24.309776   71529 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 19:04:24.309787   71529 kubeadm.go:310] 
	I0910 19:04:24.309865   71529 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 19:04:24.309874   71529 kubeadm.go:310] 
	I0910 19:04:24.309951   71529 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 19:04:24.309963   71529 kubeadm.go:310] 
	I0910 19:04:24.309984   71529 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 19:04:24.310033   71529 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 19:04:24.310085   71529 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 19:04:24.310091   71529 kubeadm.go:310] 
	I0910 19:04:24.310152   71529 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 19:04:24.310164   71529 kubeadm.go:310] 
	I0910 19:04:24.310203   71529 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 19:04:24.310214   71529 kubeadm.go:310] 
	I0910 19:04:24.310262   71529 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 19:04:24.310326   71529 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 19:04:24.310383   71529 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 19:04:24.310390   71529 kubeadm.go:310] 
	I0910 19:04:24.310457   71529 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 19:04:24.310525   71529 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 19:04:24.310531   71529 kubeadm.go:310] 
	I0910 19:04:24.310598   71529 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3uw8fx.h3bliquui6tuj5mh \
	I0910 19:04:24.310705   71529 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 \
	I0910 19:04:24.310728   71529 kubeadm.go:310] 	--control-plane 
	I0910 19:04:24.310731   71529 kubeadm.go:310] 
	I0910 19:04:24.310806   71529 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 19:04:24.310814   71529 kubeadm.go:310] 
	I0910 19:04:24.310884   71529 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3uw8fx.h3bliquui6tuj5mh \
	I0910 19:04:24.310978   71529 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 
	I0910 19:04:24.310994   71529 cni.go:84] Creating CNI manager for ""
	I0910 19:04:24.311006   71529 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:04:24.312411   71529 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 19:04:24.313516   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 19:04:24.326066   71529 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
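The two lines above create /etc/cni/net.d and copy a 496-byte 1-k8s.conflist into it; the log records only the size, not the contents. For illustration only, a typical bridge-plugin conflist of the kind this "Configuring bridge CNI" step refers to looks roughly like the constant below, written out from Go to mirror the scp-from-memory step. The subnet and the exact fields are assumptions, not the file minikube actually sent:

package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Create the CNI config directory and drop the conflist in, as the
	// mkdir -p / scp pair above does on the guest.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}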
	I0910 19:04:24.346367   71529 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 19:04:24.346446   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:24.346475   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-347802 minikube.k8s.io/updated_at=2024_09_10T19_04_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=no-preload-347802 minikube.k8s.io/primary=true
	I0910 19:04:24.374396   71529 ops.go:34] apiserver oom_adj: -16
	I0910 19:04:24.561164   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:25.061938   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:25.561435   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:26.062175   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:26.561899   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:27.061256   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:24.664345   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:26.666316   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:27.561862   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:28.061889   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:28.562200   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:28.732352   71529 kubeadm.go:1113] duration metric: took 4.385961888s to wait for elevateKubeSystemPrivileges
	I0910 19:04:28.732387   71529 kubeadm.go:394] duration metric: took 5m2.035769941s to StartCluster
	I0910 19:04:28.732410   71529 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:04:28.732497   71529 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 19:04:28.735625   71529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:04:28.735909   71529 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 19:04:28.736234   71529 config.go:182] Loaded profile config "no-preload-347802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:04:28.736296   71529 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 19:04:28.736417   71529 addons.go:69] Setting storage-provisioner=true in profile "no-preload-347802"
	I0910 19:04:28.736445   71529 addons.go:234] Setting addon storage-provisioner=true in "no-preload-347802"
	W0910 19:04:28.736453   71529 addons.go:243] addon storage-provisioner should already be in state true
	I0910 19:04:28.736480   71529 host.go:66] Checking if "no-preload-347802" exists ...
	I0910 19:04:28.736667   71529 addons.go:69] Setting default-storageclass=true in profile "no-preload-347802"
	I0910 19:04:28.736674   71529 addons.go:69] Setting metrics-server=true in profile "no-preload-347802"
	I0910 19:04:28.736703   71529 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-347802"
	I0910 19:04:28.736717   71529 addons.go:234] Setting addon metrics-server=true in "no-preload-347802"
	W0910 19:04:28.736727   71529 addons.go:243] addon metrics-server should already be in state true
	I0910 19:04:28.736758   71529 host.go:66] Checking if "no-preload-347802" exists ...
	I0910 19:04:28.737346   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.737360   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.737401   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.737709   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.737809   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.737832   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.737891   71529 out.go:177] * Verifying Kubernetes components...
	I0910 19:04:28.739122   71529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:04:28.755720   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41059
	I0910 19:04:28.755754   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0910 19:04:28.756110   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46397
	I0910 19:04:28.756297   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.756298   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.756688   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.756870   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.756891   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.757053   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.757092   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.757402   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.757426   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.757451   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.757637   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.757759   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.758328   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.758368   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.760809   71529 addons.go:234] Setting addon default-storageclass=true in "no-preload-347802"
	W0910 19:04:28.760825   71529 addons.go:243] addon default-storageclass should already be in state true
	I0910 19:04:28.760848   71529 host.go:66] Checking if "no-preload-347802" exists ...
	I0910 19:04:28.761254   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.761285   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.761486   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.761994   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.762024   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.775766   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41103
	I0910 19:04:28.776199   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.776801   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.776824   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.777167   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.777359   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.777651   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0910 19:04:28.778091   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.778678   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.778696   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.779019   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.779215   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.779616   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 19:04:28.780231   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0910 19:04:28.780605   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.780675   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 19:04:28.781156   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.781183   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.781330   71529 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:04:28.781416   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.781810   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.781841   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.782326   71529 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 19:04:28.782391   71529 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:04:28.782408   71529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 19:04:28.782425   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 19:04:28.783393   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 19:04:28.783413   71529 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 19:04:28.783433   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 19:04:28.785287   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.785763   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 19:04:28.785792   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.785948   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 19:04:28.786114   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 19:04:28.786250   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 19:04:28.786397   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 19:04:28.786768   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.787101   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 19:04:28.787124   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.787330   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 19:04:28.787492   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 19:04:28.787637   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 19:04:28.787747   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 19:04:28.802599   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0910 19:04:28.802947   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.803402   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.803415   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.803711   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.803882   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.805296   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 19:04:28.805498   71529 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 19:04:28.805510   71529 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 19:04:28.805523   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 19:04:28.808615   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.809041   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 19:04:28.809056   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.809333   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 19:04:28.809518   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 19:04:28.809687   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 19:04:28.809792   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 19:04:28.974399   71529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:04:29.068531   71529 node_ready.go:35] waiting up to 6m0s for node "no-preload-347802" to be "Ready" ...
	I0910 19:04:29.084281   71529 node_ready.go:49] node "no-preload-347802" has status "Ready":"True"
	I0910 19:04:29.084306   71529 node_ready.go:38] duration metric: took 15.737646ms for node "no-preload-347802" to be "Ready" ...
	I0910 19:04:29.084317   71529 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:04:29.098794   71529 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:29.122272   71529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:04:29.132813   71529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 19:04:29.191758   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 19:04:29.191777   71529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 19:04:29.224998   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 19:04:29.225019   71529 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 19:04:29.264455   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:04:29.264489   71529 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 19:04:29.369504   71529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:04:30.199702   71529 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.066859027s)
	I0910 19:04:30.199757   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.199769   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.199850   71529 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.077541595s)
	I0910 19:04:30.199895   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.199909   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.200096   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.200135   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200147   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.200155   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.200154   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.200174   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.200187   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200201   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.200209   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.200220   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.200387   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200402   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.200617   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.200655   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200680   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.219416   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.219437   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.219697   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.219705   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.219741   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.366927   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.366957   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.367264   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.367279   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.367288   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.367302   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.367506   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.367520   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.367533   71529 addons.go:475] Verifying addon metrics-server=true in "no-preload-347802"
	I0910 19:04:30.369968   71529 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0910 19:04:30.371186   71529 addons.go:510] duration metric: took 1.634894777s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
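addons.go above applies the storage-provisioner, default-storageclass and metrics-server manifests with kubectl and then reports "Verifying addon metrics-server=true". A rough way to perform the same verification with client-go, assuming the Deployment is named metrics-server in kube-system (inferred from the pod names in this log, not stated by minikube) and the usual kubeconfig path:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Compare desired vs. ready replicas of the metrics-server Deployment;
	// in the failing runs above, ReadyReplicas stays at 0.
	dep, err := client.AppsV1().Deployments("kube-system").Get(context.TODO(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	want := int32(1)
	if dep.Spec.Replicas != nil {
		want = *dep.Spec.Replicas
	}
	fmt.Printf("metrics-server: %d/%d replicas ready\n", dep.Status.ReadyReplicas, want)
}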
	I0910 19:04:31.104506   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:29.164993   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:31.668683   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:33.105761   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:35.606200   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:34.164783   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:36.663840   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:38.106188   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:39.106175   71529 pod_ready.go:93] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.106199   71529 pod_ready.go:82] duration metric: took 10.007378894s for pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.106210   71529 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hlbrz" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.111333   71529 pod_ready.go:93] pod "coredns-6f6b679f8f-hlbrz" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.111352   71529 pod_ready.go:82] duration metric: took 5.13344ms for pod "coredns-6f6b679f8f-hlbrz" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.111362   71529 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.116673   71529 pod_ready.go:93] pod "etcd-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.116689   71529 pod_ready.go:82] duration metric: took 5.319986ms for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.116697   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.125400   71529 pod_ready.go:93] pod "kube-apiserver-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.125422   71529 pod_ready.go:82] duration metric: took 8.717835ms for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.125433   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.133790   71529 pod_ready.go:93] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.133807   71529 pod_ready.go:82] duration metric: took 8.36626ms for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.133818   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gwzhs" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.504642   71529 pod_ready.go:93] pod "kube-proxy-gwzhs" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.504665   71529 pod_ready.go:82] duration metric: took 370.840119ms for pod "kube-proxy-gwzhs" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.504675   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.903625   71529 pod_ready.go:93] pod "kube-scheduler-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.903646   71529 pod_ready.go:82] duration metric: took 398.964651ms for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.903653   71529 pod_ready.go:39] duration metric: took 10.819325885s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:04:39.903666   71529 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:04:39.903710   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:39.918479   71529 api_server.go:72] duration metric: took 11.182520681s to wait for apiserver process to appear ...
	I0910 19:04:39.918501   71529 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:04:39.918521   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 19:04:39.922745   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0910 19:04:39.923681   71529 api_server.go:141] control plane version: v1.31.0
	I0910 19:04:39.923701   71529 api_server.go:131] duration metric: took 5.193102ms to wait for apiserver health ...
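The healthz wait above is a plain HTTPS GET against the API server, retried until it returns 200 with body "ok". A minimal sketch of that probe; the address comes from this log, while skipping TLS verification is a shortcut for the sketch (minikube uses the cluster's certificates), and anonymous access to /healthz relies on the default system:public-info-viewer binding:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	c := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := c.Get("https://192.168.50.138:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}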
	I0910 19:04:39.923710   71529 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:04:40.106587   71529 system_pods.go:59] 9 kube-system pods found
	I0910 19:04:40.106614   71529 system_pods.go:61] "coredns-6f6b679f8f-bsp9f" [53cd67b5-b542-4b40-adf9-3aba78407735] Running
	I0910 19:04:40.106619   71529 system_pods.go:61] "coredns-6f6b679f8f-hlbrz" [1e66ea46-d3ad-44e4-b9fc-c7ea5c44ac15] Running
	I0910 19:04:40.106623   71529 system_pods.go:61] "etcd-no-preload-347802" [8fcf8fce-881a-44d8-8763-2a02c848f39e] Running
	I0910 19:04:40.106626   71529 system_pods.go:61] "kube-apiserver-no-preload-347802" [372d6d5b-bf44-4edf-bd25-7b75f5966d9e] Running
	I0910 19:04:40.106630   71529 system_pods.go:61] "kube-controller-manager-no-preload-347802" [6ab95db4-18a0-48b6-ae4c-b08ccdeaec01] Running
	I0910 19:04:40.106633   71529 system_pods.go:61] "kube-proxy-gwzhs" [f03fe8e3-bee7-4805-a1e9-83494f33105c] Running
	I0910 19:04:40.106637   71529 system_pods.go:61] "kube-scheduler-no-preload-347802" [5a130f4f-7577-4e25-ac66-b9181733e667] Running
	I0910 19:04:40.106642   71529 system_pods.go:61] "metrics-server-6867b74b74-cz4tz" [22d16ca9-922b-40d8-97d1-47a44ba70aa3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:40.106646   71529 system_pods.go:61] "storage-provisioner" [cd77229d-0209-459f-ac5e-96317c425f60] Running
	I0910 19:04:40.106652   71529 system_pods.go:74] duration metric: took 182.93737ms to wait for pod list to return data ...
	I0910 19:04:40.106662   71529 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:04:40.303294   71529 default_sa.go:45] found service account: "default"
	I0910 19:04:40.303316   71529 default_sa.go:55] duration metric: took 196.649242ms for default service account to be created ...
	I0910 19:04:40.303324   71529 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:04:40.506862   71529 system_pods.go:86] 9 kube-system pods found
	I0910 19:04:40.506894   71529 system_pods.go:89] "coredns-6f6b679f8f-bsp9f" [53cd67b5-b542-4b40-adf9-3aba78407735] Running
	I0910 19:04:40.506902   71529 system_pods.go:89] "coredns-6f6b679f8f-hlbrz" [1e66ea46-d3ad-44e4-b9fc-c7ea5c44ac15] Running
	I0910 19:04:40.506908   71529 system_pods.go:89] "etcd-no-preload-347802" [8fcf8fce-881a-44d8-8763-2a02c848f39e] Running
	I0910 19:04:40.506913   71529 system_pods.go:89] "kube-apiserver-no-preload-347802" [372d6d5b-bf44-4edf-bd25-7b75f5966d9e] Running
	I0910 19:04:40.506919   71529 system_pods.go:89] "kube-controller-manager-no-preload-347802" [6ab95db4-18a0-48b6-ae4c-b08ccdeaec01] Running
	I0910 19:04:40.506925   71529 system_pods.go:89] "kube-proxy-gwzhs" [f03fe8e3-bee7-4805-a1e9-83494f33105c] Running
	I0910 19:04:40.506931   71529 system_pods.go:89] "kube-scheduler-no-preload-347802" [5a130f4f-7577-4e25-ac66-b9181733e667] Running
	I0910 19:04:40.506940   71529 system_pods.go:89] "metrics-server-6867b74b74-cz4tz" [22d16ca9-922b-40d8-97d1-47a44ba70aa3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:40.506949   71529 system_pods.go:89] "storage-provisioner" [cd77229d-0209-459f-ac5e-96317c425f60] Running
	I0910 19:04:40.506963   71529 system_pods.go:126] duration metric: took 203.633111ms to wait for k8s-apps to be running ...
	I0910 19:04:40.506974   71529 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:04:40.507032   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:40.522711   71529 system_svc.go:56] duration metric: took 15.728044ms WaitForService to wait for kubelet
	I0910 19:04:40.522739   71529 kubeadm.go:582] duration metric: took 11.786784927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:04:40.522761   71529 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:04:40.702993   71529 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:04:40.703011   71529 node_conditions.go:123] node cpu capacity is 2
	I0910 19:04:40.703020   71529 node_conditions.go:105] duration metric: took 180.253729ms to run NodePressure ...
	I0910 19:04:40.703031   71529 start.go:241] waiting for startup goroutines ...
	I0910 19:04:40.703037   71529 start.go:246] waiting for cluster config update ...
	I0910 19:04:40.703046   71529 start.go:255] writing updated cluster config ...
	I0910 19:04:40.703329   71529 ssh_runner.go:195] Run: rm -f paused
	I0910 19:04:40.750434   71529 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:04:40.752453   71529 out.go:177] * Done! kubectl is now configured to use "no-preload-347802" cluster and "default" namespace by default
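Note: the apiserver health wait logged above (api_server.go:253/279) is essentially a repeated GET against the /healthz endpoint until it answers 200. A minimal Go sketch of that probe follows; the endpoint address is taken from the log above, and TLS verification is skipped purely for illustration (an assumption, not how minikube itself authenticates the check).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above; substitute your own apiserver address.
	url := "https://192.168.50.138:8443/healthz"

	// Illustration only: skip certificate verification so the probe works
	// without the cluster CA on hand.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}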
	I0910 19:04:37.670616   71183 pod_ready.go:82] duration metric: took 4m0.012645309s for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	E0910 19:04:37.670637   71183 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0910 19:04:37.670644   71183 pod_ready.go:39] duration metric: took 4m3.614436373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:04:37.670658   71183 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:04:37.670693   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:37.670746   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:37.721269   71183 cri.go:89] found id: "b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:37.721295   71183 cri.go:89] found id: ""
	I0910 19:04:37.721303   71183 logs.go:276] 1 containers: [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293]
	I0910 19:04:37.721361   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.725648   71183 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:37.725711   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:37.760937   71183 cri.go:89] found id: "4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:37.760967   71183 cri.go:89] found id: ""
	I0910 19:04:37.760978   71183 logs.go:276] 1 containers: [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34]
	I0910 19:04:37.761034   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.765181   71183 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:37.765243   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:37.800419   71183 cri.go:89] found id: "6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:37.800447   71183 cri.go:89] found id: ""
	I0910 19:04:37.800457   71183 logs.go:276] 1 containers: [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313]
	I0910 19:04:37.800509   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.805255   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:37.805330   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:37.849032   71183 cri.go:89] found id: "6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:37.849055   71183 cri.go:89] found id: ""
	I0910 19:04:37.849064   71183 logs.go:276] 1 containers: [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc]
	I0910 19:04:37.849136   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.853148   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:37.853224   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:37.888327   71183 cri.go:89] found id: "f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:37.888352   71183 cri.go:89] found id: ""
	I0910 19:04:37.888361   71183 logs.go:276] 1 containers: [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e]
	I0910 19:04:37.888417   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.892721   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:37.892782   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:37.928648   71183 cri.go:89] found id: "2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:37.928671   71183 cri.go:89] found id: ""
	I0910 19:04:37.928679   71183 logs.go:276] 1 containers: [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3]
	I0910 19:04:37.928731   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.932746   71183 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:37.932804   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:37.967343   71183 cri.go:89] found id: ""
	I0910 19:04:37.967372   71183 logs.go:276] 0 containers: []
	W0910 19:04:37.967382   71183 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:37.967387   71183 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:37.967435   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:38.004150   71183 cri.go:89] found id: "11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:38.004173   71183 cri.go:89] found id: "2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:38.004176   71183 cri.go:89] found id: ""
	I0910 19:04:38.004183   71183 logs.go:276] 2 containers: [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f]
	I0910 19:04:38.004227   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:38.008118   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:38.011779   71183 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:38.011799   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:38.026386   71183 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:38.026405   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:38.149296   71183 logs.go:123] Gathering logs for kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] ...
	I0910 19:04:38.149324   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:38.200987   71183 logs.go:123] Gathering logs for etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] ...
	I0910 19:04:38.201019   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:38.243953   71183 logs.go:123] Gathering logs for coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] ...
	I0910 19:04:38.243983   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:38.287242   71183 logs.go:123] Gathering logs for kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] ...
	I0910 19:04:38.287272   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:38.329165   71183 logs.go:123] Gathering logs for kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] ...
	I0910 19:04:38.329193   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:38.391117   71183 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:38.391144   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:38.464906   71183 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:38.464944   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:38.979681   71183 logs.go:123] Gathering logs for storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] ...
	I0910 19:04:38.979732   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:39.015604   71183 logs.go:123] Gathering logs for storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] ...
	I0910 19:04:39.015636   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:39.055715   71183 logs.go:123] Gathering logs for container status ...
	I0910 19:04:39.055748   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:39.103920   71183 logs.go:123] Gathering logs for kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] ...
	I0910 19:04:39.103952   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:41.650354   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:41.667568   71183 api_server.go:72] duration metric: took 4m15.330735169s to wait for apiserver process to appear ...
	I0910 19:04:41.667604   71183 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:04:41.667636   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:41.667682   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:41.707476   71183 cri.go:89] found id: "b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:41.707507   71183 cri.go:89] found id: ""
	I0910 19:04:41.707520   71183 logs.go:276] 1 containers: [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293]
	I0910 19:04:41.707590   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.711732   71183 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:41.711794   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:41.745943   71183 cri.go:89] found id: "4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:41.745963   71183 cri.go:89] found id: ""
	I0910 19:04:41.745972   71183 logs.go:276] 1 containers: [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34]
	I0910 19:04:41.746023   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.749930   71183 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:41.749978   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:41.790296   71183 cri.go:89] found id: "6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:41.790318   71183 cri.go:89] found id: ""
	I0910 19:04:41.790327   71183 logs.go:276] 1 containers: [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313]
	I0910 19:04:41.790388   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.794933   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:41.794988   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:41.840669   71183 cri.go:89] found id: "6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:41.840695   71183 cri.go:89] found id: ""
	I0910 19:04:41.840704   71183 logs.go:276] 1 containers: [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc]
	I0910 19:04:41.840762   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.845674   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:41.845729   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:41.891686   71183 cri.go:89] found id: "f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:41.891708   71183 cri.go:89] found id: ""
	I0910 19:04:41.891717   71183 logs.go:276] 1 containers: [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e]
	I0910 19:04:41.891774   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.896435   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:41.896486   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:41.935802   71183 cri.go:89] found id: "2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:41.935829   71183 cri.go:89] found id: ""
	I0910 19:04:41.935838   71183 logs.go:276] 1 containers: [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3]
	I0910 19:04:41.935882   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.940924   71183 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:41.940979   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:41.980326   71183 cri.go:89] found id: ""
	I0910 19:04:41.980349   71183 logs.go:276] 0 containers: []
	W0910 19:04:41.980357   71183 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:41.980362   71183 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:41.980409   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:42.021683   71183 cri.go:89] found id: "11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:42.021701   71183 cri.go:89] found id: "2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:42.021704   71183 cri.go:89] found id: ""
	I0910 19:04:42.021711   71183 logs.go:276] 2 containers: [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f]
	I0910 19:04:42.021760   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:42.025986   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:42.029896   71183 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:42.029919   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:42.101147   71183 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:42.101182   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:42.115299   71183 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:42.115324   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:42.230472   71183 logs.go:123] Gathering logs for kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] ...
	I0910 19:04:42.230503   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:42.285314   71183 logs.go:123] Gathering logs for etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] ...
	I0910 19:04:42.285341   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:42.338243   71183 logs.go:123] Gathering logs for coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] ...
	I0910 19:04:42.338283   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:42.380609   71183 logs.go:123] Gathering logs for kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] ...
	I0910 19:04:42.380636   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:42.424255   71183 logs.go:123] Gathering logs for kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] ...
	I0910 19:04:42.424290   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:42.481943   71183 logs.go:123] Gathering logs for kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] ...
	I0910 19:04:42.481972   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:42.525590   71183 logs.go:123] Gathering logs for storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] ...
	I0910 19:04:42.525613   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:42.566519   71183 logs.go:123] Gathering logs for storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] ...
	I0910 19:04:42.566546   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:42.601221   71183 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:42.601256   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:43.021780   71183 logs.go:123] Gathering logs for container status ...
	I0910 19:04:43.021816   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:45.569149   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:04:45.575146   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I0910 19:04:45.576058   71183 api_server.go:141] control plane version: v1.31.0
	I0910 19:04:45.576077   71183 api_server.go:131] duration metric: took 3.908465286s to wait for apiserver health ...
	I0910 19:04:45.576088   71183 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:04:45.576112   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:45.576159   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:45.631224   71183 cri.go:89] found id: "b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:45.631246   71183 cri.go:89] found id: ""
	I0910 19:04:45.631254   71183 logs.go:276] 1 containers: [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293]
	I0910 19:04:45.631310   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.636343   71183 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:45.636408   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:45.675538   71183 cri.go:89] found id: "4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:45.675558   71183 cri.go:89] found id: ""
	I0910 19:04:45.675565   71183 logs.go:276] 1 containers: [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34]
	I0910 19:04:45.675620   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.679865   71183 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:45.679921   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:45.724808   71183 cri.go:89] found id: "6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:45.724835   71183 cri.go:89] found id: ""
	I0910 19:04:45.724844   71183 logs.go:276] 1 containers: [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313]
	I0910 19:04:45.724898   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.729083   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:45.729141   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:45.762943   71183 cri.go:89] found id: "6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:45.762965   71183 cri.go:89] found id: ""
	I0910 19:04:45.762973   71183 logs.go:276] 1 containers: [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc]
	I0910 19:04:45.763022   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.766889   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:45.766935   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:45.802849   71183 cri.go:89] found id: "f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:45.802875   71183 cri.go:89] found id: ""
	I0910 19:04:45.802883   71183 logs.go:276] 1 containers: [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e]
	I0910 19:04:45.802924   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.806796   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:45.806860   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:45.841656   71183 cri.go:89] found id: "2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:45.841675   71183 cri.go:89] found id: ""
	I0910 19:04:45.841682   71183 logs.go:276] 1 containers: [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3]
	I0910 19:04:45.841722   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.846078   71183 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:45.846145   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:45.883750   71183 cri.go:89] found id: ""
	I0910 19:04:45.883773   71183 logs.go:276] 0 containers: []
	W0910 19:04:45.883787   71183 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:45.883795   71183 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:45.883857   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:45.918786   71183 cri.go:89] found id: "11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:45.918815   71183 cri.go:89] found id: "2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:45.918822   71183 cri.go:89] found id: ""
	I0910 19:04:45.918829   71183 logs.go:276] 2 containers: [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f]
	I0910 19:04:45.918876   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.923329   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.927395   71183 logs.go:123] Gathering logs for storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] ...
	I0910 19:04:45.927417   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:45.963527   71183 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:45.963557   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:46.364843   71183 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:46.364886   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:46.379339   71183 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:46.379366   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:46.483159   71183 logs.go:123] Gathering logs for kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] ...
	I0910 19:04:46.483190   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:46.523850   71183 logs.go:123] Gathering logs for kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] ...
	I0910 19:04:46.523877   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:46.574864   71183 logs.go:123] Gathering logs for kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] ...
	I0910 19:04:46.574905   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:46.613765   71183 logs.go:123] Gathering logs for storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] ...
	I0910 19:04:46.613793   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:46.659791   71183 logs.go:123] Gathering logs for container status ...
	I0910 19:04:46.659819   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:46.722103   71183 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:46.722138   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:46.794098   71183 logs.go:123] Gathering logs for kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] ...
	I0910 19:04:46.794140   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:46.850112   71183 logs.go:123] Gathering logs for etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] ...
	I0910 19:04:46.850148   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:46.899733   71183 logs.go:123] Gathering logs for coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] ...
	I0910 19:04:46.899770   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
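Note: each "listing CRI containers" / "Gathering logs" pair above boils down to two crictl invocations per component: "sudo crictl ps -a --quiet --name=<component>" to resolve container IDs, then "sudo crictl logs --tail 400 <id>" to fetch the logs. A small Go sketch of the same pattern, assuming crictl and sudo are available on the node as in the logged commands:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the first half of the pattern: ask crictl for the IDs
// of all containers (running or not) whose name matches the filter.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Println("crictl ps failed:", err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), name, ids)
		for _, id := range ids {
			// Same tail depth as the commands shown in the log above.
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Println("crictl logs failed:", err)
				continue
			}
			fmt.Printf("--- %s [%s] ---\n%s\n", name, id, logs)
		}
	}
}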
	I0910 19:04:44.413134   72122 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0910 19:04:44.413215   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:44.413400   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:04:49.448164   71183 system_pods.go:59] 8 kube-system pods found
	I0910 19:04:49.448194   71183 system_pods.go:61] "coredns-6f6b679f8f-mt78p" [b4bbfe99-3c36-4095-b7e8-ee0861f9973f] Running
	I0910 19:04:49.448201   71183 system_pods.go:61] "etcd-embed-certs-836868" [0515a8db-e1b9-41d9-a69e-e49fbd6af70f] Running
	I0910 19:04:49.448207   71183 system_pods.go:61] "kube-apiserver-embed-certs-836868" [f6771940-518b-4ae6-93a6-8ffd2e08a6ff] Running
	I0910 19:04:49.448216   71183 system_pods.go:61] "kube-controller-manager-embed-certs-836868" [07f6b11e-478b-4501-b615-8906877c7cbe] Running
	I0910 19:04:49.448220   71183 system_pods.go:61] "kube-proxy-4fddv" [13f0b1df-26eb-4a6c-957d-0b7655309cb9] Running
	I0910 19:04:49.448225   71183 system_pods.go:61] "kube-scheduler-embed-certs-836868" [a16bf2b1-3fe3-487b-92f7-f71f693b83dd] Running
	I0910 19:04:49.448232   71183 system_pods.go:61] "metrics-server-6867b74b74-26knw" [fdf89bfa-f2b6-4dc4-9279-ed75c1256494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:49.448239   71183 system_pods.go:61] "storage-provisioner" [47ed78c5-1cce-4d50-a023-5c356f331035] Running
	I0910 19:04:49.448248   71183 system_pods.go:74] duration metric: took 3.872154051s to wait for pod list to return data ...
	I0910 19:04:49.448255   71183 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:04:49.450795   71183 default_sa.go:45] found service account: "default"
	I0910 19:04:49.450816   71183 default_sa.go:55] duration metric: took 2.553358ms for default service account to be created ...
	I0910 19:04:49.450826   71183 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:04:49.454993   71183 system_pods.go:86] 8 kube-system pods found
	I0910 19:04:49.455015   71183 system_pods.go:89] "coredns-6f6b679f8f-mt78p" [b4bbfe99-3c36-4095-b7e8-ee0861f9973f] Running
	I0910 19:04:49.455020   71183 system_pods.go:89] "etcd-embed-certs-836868" [0515a8db-e1b9-41d9-a69e-e49fbd6af70f] Running
	I0910 19:04:49.455024   71183 system_pods.go:89] "kube-apiserver-embed-certs-836868" [f6771940-518b-4ae6-93a6-8ffd2e08a6ff] Running
	I0910 19:04:49.455030   71183 system_pods.go:89] "kube-controller-manager-embed-certs-836868" [07f6b11e-478b-4501-b615-8906877c7cbe] Running
	I0910 19:04:49.455033   71183 system_pods.go:89] "kube-proxy-4fddv" [13f0b1df-26eb-4a6c-957d-0b7655309cb9] Running
	I0910 19:04:49.455038   71183 system_pods.go:89] "kube-scheduler-embed-certs-836868" [a16bf2b1-3fe3-487b-92f7-f71f693b83dd] Running
	I0910 19:04:49.455047   71183 system_pods.go:89] "metrics-server-6867b74b74-26knw" [fdf89bfa-f2b6-4dc4-9279-ed75c1256494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:49.455053   71183 system_pods.go:89] "storage-provisioner" [47ed78c5-1cce-4d50-a023-5c356f331035] Running
	I0910 19:04:49.455062   71183 system_pods.go:126] duration metric: took 4.230457ms to wait for k8s-apps to be running ...
	I0910 19:04:49.455073   71183 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:04:49.455130   71183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:49.471265   71183 system_svc.go:56] duration metric: took 16.184718ms WaitForService to wait for kubelet
	I0910 19:04:49.471293   71183 kubeadm.go:582] duration metric: took 4m23.134472506s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:04:49.471320   71183 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:04:49.475529   71183 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:04:49.475548   71183 node_conditions.go:123] node cpu capacity is 2
	I0910 19:04:49.475558   71183 node_conditions.go:105] duration metric: took 4.228611ms to run NodePressure ...
	I0910 19:04:49.475567   71183 start.go:241] waiting for startup goroutines ...
	I0910 19:04:49.475577   71183 start.go:246] waiting for cluster config update ...
	I0910 19:04:49.475589   71183 start.go:255] writing updated cluster config ...
	I0910 19:04:49.475827   71183 ssh_runner.go:195] Run: rm -f paused
	I0910 19:04:49.522354   71183 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:04:49.524738   71183 out.go:177] * Done! kubectl is now configured to use "embed-certs-836868" cluster and "default" namespace by default
	I0910 19:04:49.413796   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:49.413967   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:04:59.414341   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:59.414514   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:19.415680   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:05:19.415950   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:59.417770   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:05:59.418015   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:59.418035   72122 kubeadm.go:310] 
	I0910 19:05:59.418101   72122 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0910 19:05:59.418137   72122 kubeadm.go:310] 		timed out waiting for the condition
	I0910 19:05:59.418143   72122 kubeadm.go:310] 
	I0910 19:05:59.418178   72122 kubeadm.go:310] 	This error is likely caused by:
	I0910 19:05:59.418207   72122 kubeadm.go:310] 		- The kubelet is not running
	I0910 19:05:59.418313   72122 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0910 19:05:59.418321   72122 kubeadm.go:310] 
	I0910 19:05:59.418443   72122 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0910 19:05:59.418477   72122 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0910 19:05:59.418519   72122 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0910 19:05:59.418527   72122 kubeadm.go:310] 
	I0910 19:05:59.418625   72122 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0910 19:05:59.418723   72122 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0910 19:05:59.418731   72122 kubeadm.go:310] 
	I0910 19:05:59.418869   72122 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0910 19:05:59.418976   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0910 19:05:59.419045   72122 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0910 19:05:59.419141   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0910 19:05:59.419152   72122 kubeadm.go:310] 
	I0910 19:05:59.420015   72122 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:05:59.420093   72122 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0910 19:05:59.420165   72122 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0910 19:05:59.420289   72122 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
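Note: the kubelet-check lines quoted above amount to kubeadm repeatedly probing the kubelet's local healthz endpoint (http://localhost:10248/healthz) after an initial 40s grace period. A rough Go equivalent of that polling loop follows; the 2s interval and 4m deadline are illustrative choices, not kubeadm's exact schedule.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// Poll the kubelet healthz endpoint until it answers 200 or a deadline passes,
// much like the kubelet-check output above ('curl -sSL http://localhost:10248/healthz').
func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://localhost:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet healthz: ok")
				return
			}
			fmt.Println("kubelet healthz returned", resp.StatusCode)
		} else {
			// Matches the failure mode in the log: connection refused while
			// the kubelet is not running.
			fmt.Println("kubelet not responding yet:", err)
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for the kubelet healthz endpoint")
}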
	
	I0910 19:05:59.420339   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:06:04.848652   72122 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.428289133s)
	I0910 19:06:04.848719   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:06:04.862914   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:06:04.872903   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:06:04.872920   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 19:06:04.872960   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:06:04.882109   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:06:04.882168   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:06:04.890962   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:06:04.899925   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:06:04.899985   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:06:04.908796   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:06:04.917123   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:06:04.917173   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:06:04.925821   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:06:04.937885   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:06:04.937963   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
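Note: the config-check sequence above (kubeadm.go:155-163) greps each leftover kubeconfig under /etc/kubernetes for https://control-plane.minikube.internal:8443 and removes any file that does not reference it, so the retried kubeadm init regenerates them. A local approximation in Go, assuming direct file access on the node rather than the ssh_runner used in the log (and the same root privileges):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			// Missing file, as in the 'No such file or directory' output above:
			// nothing to clean up.
			fmt.Printf("%s: %v (skipping)\n", f, err)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%s does not reference %s - removing\n", f, endpoint)
			if err := os.Remove(f); err != nil {
				fmt.Println("remove failed:", err)
			}
		}
	}
}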
	I0910 19:06:04.948108   72122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:06:05.019246   72122 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0910 19:06:05.019321   72122 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:06:05.162639   72122 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:06:05.162770   72122 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:06:05.162918   72122 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 19:06:05.343270   72122 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:06:05.345092   72122 out.go:235]   - Generating certificates and keys ...
	I0910 19:06:05.345189   72122 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:06:05.345299   72122 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:06:05.345417   72122 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:06:05.345497   72122 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:06:05.345606   72122 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:06:05.345718   72122 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:06:05.345981   72122 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:06:05.346367   72122 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:06:05.346822   72122 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:06:05.347133   72122 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:06:05.347246   72122 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:06:05.347346   72122 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:06:05.536681   72122 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:06:05.773929   72122 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:06:05.994857   72122 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:06:06.139145   72122 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:06:06.154510   72122 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:06:06.155479   72122 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:06:06.155548   72122 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:06:06.311520   72122 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:06:06.314167   72122 out.go:235]   - Booting up control plane ...
	I0910 19:06:06.314311   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:06:06.320856   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:06:06.321801   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:06:06.322508   72122 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:06:06.324744   72122 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 19:06:46.327168   72122 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0910 19:06:46.327286   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:06:46.327534   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:06:51.328423   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:06:51.328643   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:07:01.329028   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:07:01.329315   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:07:21.329371   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:07:21.329627   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:08:01.328238   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:08:01.328535   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:08:01.328566   72122 kubeadm.go:310] 
	I0910 19:08:01.328625   72122 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0910 19:08:01.328688   72122 kubeadm.go:310] 		timed out waiting for the condition
	I0910 19:08:01.328701   72122 kubeadm.go:310] 
	I0910 19:08:01.328749   72122 kubeadm.go:310] 	This error is likely caused by:
	I0910 19:08:01.328797   72122 kubeadm.go:310] 		- The kubelet is not running
	I0910 19:08:01.328941   72122 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0910 19:08:01.328953   72122 kubeadm.go:310] 
	I0910 19:08:01.329068   72122 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0910 19:08:01.329136   72122 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0910 19:08:01.329177   72122 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0910 19:08:01.329191   72122 kubeadm.go:310] 
	I0910 19:08:01.329310   72122 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0910 19:08:01.329377   72122 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0910 19:08:01.329383   72122 kubeadm.go:310] 
	I0910 19:08:01.329468   72122 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0910 19:08:01.329539   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0910 19:08:01.329607   72122 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0910 19:08:01.329667   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0910 19:08:01.329674   72122 kubeadm.go:310] 
	I0910 19:08:01.330783   72122 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:08:01.330892   72122 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0910 19:08:01.330963   72122 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0910 19:08:01.331020   72122 kubeadm.go:394] duration metric: took 8m1.874926868s to StartCluster
	I0910 19:08:01.331061   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:08:01.331117   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:08:01.385468   72122 cri.go:89] found id: ""
	I0910 19:08:01.385492   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.385499   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:08:01.385505   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:08:01.385571   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:08:01.424028   72122 cri.go:89] found id: ""
	I0910 19:08:01.424051   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.424060   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:08:01.424064   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:08:01.424121   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:08:01.462946   72122 cri.go:89] found id: ""
	I0910 19:08:01.462973   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.462983   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:08:01.462991   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:08:01.463045   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:08:01.498242   72122 cri.go:89] found id: ""
	I0910 19:08:01.498269   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.498278   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:08:01.498283   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:08:01.498329   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:08:01.532917   72122 cri.go:89] found id: ""
	I0910 19:08:01.532946   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.532953   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:08:01.532959   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:08:01.533011   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:08:01.567935   72122 cri.go:89] found id: ""
	I0910 19:08:01.567959   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.567967   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:08:01.567973   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:08:01.568027   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:08:01.601393   72122 cri.go:89] found id: ""
	I0910 19:08:01.601418   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.601426   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:08:01.601432   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:08:01.601489   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:08:01.639307   72122 cri.go:89] found id: ""
	I0910 19:08:01.639335   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.639345   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:08:01.639358   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:08:01.639373   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:08:01.726566   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:08:01.726591   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:08:01.726614   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:08:01.839965   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:08:01.840004   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:08:01.879658   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:08:01.879687   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:08:01.939066   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:08:01.939102   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0910 19:08:01.955390   72122 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0910 19:08:01.955436   72122 out.go:270] * 
	W0910 19:08:01.955500   72122 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0910 19:08:01.955524   72122 out.go:270] * 
	W0910 19:08:01.956343   72122 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 19:08:01.959608   72122 out.go:201] 
	W0910 19:08:01.960877   72122 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0910 19:08:01.960929   72122 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0910 19:08:01.960957   72122 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0910 19:08:01.962345   72122 out.go:201] 
	
	
	==> CRI-O <==
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.254651061Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995827254618307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=07cc0cfe-9001-4580-a129-96670882de0f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.255643762Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7dabfaa3-d246-453e-8ee6-30a107516d6c name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.255768984Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7dabfaa3-d246-453e-8ee6-30a107516d6c name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.255878900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7dabfaa3-d246-453e-8ee6-30a107516d6c name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.288774802Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c96b7f4d-aab5-404f-a032-d306dab4440a name=/runtime.v1.RuntimeService/Version
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.288856960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c96b7f4d-aab5-404f-a032-d306dab4440a name=/runtime.v1.RuntimeService/Version
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.289781903Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8474870e-a62c-45ce-9e37-390e20d77f61 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.290251995Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995827290222465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8474870e-a62c-45ce-9e37-390e20d77f61 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.290903341Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=317f2f76-4bfb-4bc2-a525-b49b62e260e8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.290951273Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=317f2f76-4bfb-4bc2-a525-b49b62e260e8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.290989890Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=317f2f76-4bfb-4bc2-a525-b49b62e260e8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.328663215Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ddbeaf9a-9bb5-4d99-b976-44b4cf636e2e name=/runtime.v1.RuntimeService/Version
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.328738616Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ddbeaf9a-9bb5-4d99-b976-44b4cf636e2e name=/runtime.v1.RuntimeService/Version
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.330025483Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff106275-f217-4983-9c32-edb365f86986 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.330651739Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995827330621742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff106275-f217-4983-9c32-edb365f86986 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.331240910Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bda231ca-7f2c-4141-bfe3-7dac6d0c3bdf name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.331334776Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bda231ca-7f2c-4141-bfe3-7dac6d0c3bdf name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.331388145Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bda231ca-7f2c-4141-bfe3-7dac6d0c3bdf name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.364545376Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29f6d3ec-9f4e-44da-8534-555a585b8f39 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.364644823Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29f6d3ec-9f4e-44da-8534-555a585b8f39 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.366210481Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4ca9864-85d5-4eb8-8f41-0f76a1e2bad4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.366615278Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995827366591096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4ca9864-85d5-4eb8-8f41-0f76a1e2bad4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.367249646Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ff79401-d772-44e7-bc67-72c28fe0b4e8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.367305787Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ff79401-d772-44e7-bc67-72c28fe0b4e8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:17:07 old-k8s-version-432422 crio[642]: time="2024-09-10 19:17:07.367340752Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5ff79401-d772-44e7-bc67-72c28fe0b4e8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep10 18:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.058119] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044186] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.255058] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.413650] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.618121] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.079518] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.057884] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065532] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.191553] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.154429] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.265022] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +6.430445] systemd-fstab-generator[893]: Ignoring "noauto" option for root device
	[  +0.070012] kauditd_printk_skb: 130 callbacks suppressed
	[Sep10 19:00] systemd-fstab-generator[1016]: Ignoring "noauto" option for root device
	[ +11.862536] kauditd_printk_skb: 46 callbacks suppressed
	[Sep10 19:04] systemd-fstab-generator[5076]: Ignoring "noauto" option for root device
	[Sep10 19:06] systemd-fstab-generator[5359]: Ignoring "noauto" option for root device
	[  +0.066075] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:17:07 up 17 min,  0 users,  load average: 0.00, 0.02, 0.05
	Linux old-k8s-version-432422 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]:         /usr/local/go/src/net/sock_posix.go:70 +0x1c5
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]: net.internetSocket(0x4f7fe40, 0xc0008649c0, 0x48ab5d6, 0x3, 0x4fb9160, 0x0, 0x4fb9160, 0xc0009764e0, 0x1, 0x0, ...)
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]:         /usr/local/go/src/net/ipsock_posix.go:141 +0x145
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]: net.(*sysDialer).doDialTCP(0xc000ae3a80, 0x4f7fe40, 0xc0008649c0, 0x0, 0xc0009764e0, 0x3fddce0, 0x70f9210, 0x0)
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]:         /usr/local/go/src/net/tcpsock_posix.go:65 +0xc5
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]: net.(*sysDialer).dialTCP(0xc000ae3a80, 0x4f7fe40, 0xc0008649c0, 0x0, 0xc0009764e0, 0x57b620, 0x48ab5d6, 0x7f4254523280)
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]:         /usr/local/go/src/net/tcpsock_posix.go:61 +0xd7
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]: net.(*sysDialer).dialSingle(0xc000ae3a80, 0x4f7fe40, 0xc0008649c0, 0x4f1ff00, 0xc0009764e0, 0x0, 0x0, 0x0, 0x0)
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]: net.(*sysDialer).dialSerial(0xc000ae3a80, 0x4f7fe40, 0xc0008649c0, 0xc000327670, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]:         /usr/local/go/src/net/dial.go:548 +0x152
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]: net.(*Dialer).DialContext(0xc00013a240, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0007633b0, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000aee3a0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0007633b0, 0x24, 0x60, 0x7f4254541550, 0x118, ...)
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]: net/http.(*Transport).dial(0xc000886780, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0007633b0, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]: net/http.(*Transport).dialConn(0xc000886780, 0x4f7fe00, 0xc000052030, 0x0, 0xc000364540, 0x5, 0xc0007633b0, 0x24, 0x0, 0xc00067b680, ...)
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]: net/http.(*Transport).dialConnFor(0xc000886780, 0xc0008902c0)
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]: created by net/http.(*Transport).queueForDial
	Sep 10 19:17:07 old-k8s-version-432422 kubelet[6534]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Sep 10 19:17:07 old-k8s-version-432422 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 10 19:17:07 old-k8s-version-432422 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-432422 -n old-k8s-version-432422
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-432422 -n old-k8s-version-432422: exit status 2 (231.95184ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-432422" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.36s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (477.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-557504 -n default-k8s-diff-port-557504
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-10 19:21:06.785523774 +0000 UTC m=+6726.609298530
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-557504 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-557504 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.437µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-557504 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-557504 -n default-k8s-diff-port-557504
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-557504 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-557504 logs -n 25: (1.096796868s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-557504  | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC | 10 Sep 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC |                     |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-836868                 | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-836868                                  | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-432422        | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-347802                  | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-557504       | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-432422             | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 19:19 UTC | 10 Sep 24 19:19 UTC |
	| start   | -p newest-cni-374465 --memory=2200 --alsologtostderr   | newest-cni-374465            | jenkins | v1.34.0 | 10 Sep 24 19:19 UTC | 10 Sep 24 19:20 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 19:19 UTC | 10 Sep 24 19:19 UTC |
	| addons  | enable metrics-server -p newest-cni-374465             | newest-cni-374465            | jenkins | v1.34.0 | 10 Sep 24 19:20 UTC | 10 Sep 24 19:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-374465                                   | newest-cni-374465            | jenkins | v1.34.0 | 10 Sep 24 19:20 UTC | 10 Sep 24 19:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-374465                  | newest-cni-374465            | jenkins | v1.34.0 | 10 Sep 24 19:20 UTC | 10 Sep 24 19:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-374465 --memory=2200 --alsologtostderr   | newest-cni-374465            | jenkins | v1.34.0 | 10 Sep 24 19:20 UTC | 10 Sep 24 19:20 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-836868                                  | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 19:20 UTC | 10 Sep 24 19:20 UTC |
	| image   | newest-cni-374465 image list                           | newest-cni-374465            | jenkins | v1.34.0 | 10 Sep 24 19:20 UTC | 10 Sep 24 19:20 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-374465                                   | newest-cni-374465            | jenkins | v1.34.0 | 10 Sep 24 19:20 UTC | 10 Sep 24 19:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-374465                                   | newest-cni-374465            | jenkins | v1.34.0 | 10 Sep 24 19:20 UTC | 10 Sep 24 19:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-374465                                   | newest-cni-374465            | jenkins | v1.34.0 | 10 Sep 24 19:20 UTC | 10 Sep 24 19:20 UTC |
	| delete  | -p newest-cni-374465                                   | newest-cni-374465            | jenkins | v1.34.0 | 10 Sep 24 19:20 UTC | 10 Sep 24 19:20 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 19:20:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 19:20:15.989289   79419 out.go:345] Setting OutFile to fd 1 ...
	I0910 19:20:15.989379   79419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 19:20:15.989385   79419 out.go:358] Setting ErrFile to fd 2...
	I0910 19:20:15.989389   79419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 19:20:15.989546   79419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 19:20:15.990098   79419 out.go:352] Setting JSON to false
	I0910 19:20:15.990948   79419 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7368,"bootTime":1725988648,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 19:20:15.991000   79419 start.go:139] virtualization: kvm guest
	I0910 19:20:15.993101   79419 out.go:177] * [newest-cni-374465] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 19:20:15.994337   79419 notify.go:220] Checking for updates...
	I0910 19:20:15.994349   79419 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 19:20:15.995671   79419 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 19:20:15.996919   79419 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 19:20:15.998179   79419 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 19:20:15.999355   79419 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 19:20:16.000420   79419 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 19:20:16.002129   79419 config.go:182] Loaded profile config "newest-cni-374465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:20:16.002794   79419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:20:16.002866   79419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:20:16.017632   79419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36621
	I0910 19:20:16.017983   79419 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:20:16.018522   79419 main.go:141] libmachine: Using API Version  1
	I0910 19:20:16.018544   79419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:20:16.018898   79419 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:20:16.019068   79419 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:20:16.019291   79419 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 19:20:16.019553   79419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:20:16.019585   79419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:20:16.034064   79419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46029
	I0910 19:20:16.034534   79419 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:20:16.035060   79419 main.go:141] libmachine: Using API Version  1
	I0910 19:20:16.035091   79419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:20:16.035428   79419 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:20:16.035624   79419 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
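
Each "Launching plugin server for driver kvm2" / "Plugin server listening at address 127.0.0.1:<port>" pair above reflects libmachine's out-of-process driver model: the docker-machine-driver-kvm2 binary is started as a child process, serves its driver API on a local RPC endpoint, and calls such as .GetVersion, .SetConfigRaw and .DriverName are made against that endpoint. A minimal sketch of the client-side call pattern with Go's net/rpc; the service and method names below are hypothetical stand-ins, not libmachine's actual wire protocol:

	package main

	import (
		"fmt"
		"log"
		"net/rpc"
	)

	func main() {
		// The driver plugin logs the address it listens on, e.g. 127.0.0.1:46029.
		client, err := rpc.Dial("tcp", "127.0.0.1:46029")
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		// Hypothetical service/method name, purely for illustration.
		var apiVersion int
		if err := client.Call("Driver.GetVersion", 0, &apiVersion); err != nil {
			log.Fatal(err)
		}
		fmt.Println("driver API version:", apiVersion)
	}
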
	I0910 19:20:16.074812   79419 out.go:177] * Using the kvm2 driver based on existing profile
	I0910 19:20:16.075879   79419 start.go:297] selected driver: kvm2
	I0910 19:20:16.075895   79419 start.go:901] validating driver "kvm2" against &{Name:newest-cni-374465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-374465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.46 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:20:16.075983   79419 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 19:20:16.076614   79419 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 19:20:16.076670   79419 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 19:20:16.091258   79419 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 19:20:16.091609   79419 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0910 19:20:16.091666   79419 cni.go:84] Creating CNI manager for ""
	I0910 19:20:16.091679   79419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:20:16.091714   79419 start.go:340] cluster config:
	{Name:newest-cni-374465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-374465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.46 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:20:16.091829   79419 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 19:20:16.093929   79419 out.go:177] * Starting "newest-cni-374465" primary control-plane node in "newest-cni-374465" cluster
	I0910 19:20:16.095091   79419 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 19:20:16.095146   79419 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0910 19:20:16.095159   79419 cache.go:56] Caching tarball of preloaded images
	I0910 19:20:16.095236   79419 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 19:20:16.095246   79419 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 19:20:16.095363   79419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/config.json ...
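
The profile.go:143 lines persist the cluster configuration shown above as JSON at .minikube/profiles/newest-cni-374465/config.json. A minimal, self-contained sketch of that save step, using a deliberately reduced stand-in for minikube's full cluster config type:

	package main

	import (
		"encoding/json"
		"log"
		"os"
		"path/filepath"
	)

	// ProfileConfig is a trimmed-down illustration, not minikube's real config struct.
	type ProfileConfig struct {
		Name              string
		Driver            string
		ContainerRuntime  string
		KubernetesVersion string
	}

	// saveProfile writes the profile config as pretty-printed JSON under dir/config.json.
	func saveProfile(dir string, cfg ProfileConfig) error {
		if err := os.MkdirAll(dir, 0o755); err != nil {
			return err
		}
		data, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			return err
		}
		return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
	}

	func main() {
		cfg := ProfileConfig{Name: "newest-cni-374465", Driver: "kvm2", ContainerRuntime: "crio", KubernetesVersion: "v1.31.0"}
		if err := saveProfile("profiles/newest-cni-374465", cfg); err != nil {
			log.Fatal(err)
		}
	}
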
	I0910 19:20:16.095691   79419 start.go:360] acquireMachinesLock for newest-cni-374465: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 19:20:16.095760   79419 start.go:364] duration metric: took 41.084µs to acquireMachinesLock for "newest-cni-374465"
	I0910 19:20:16.095781   79419 start.go:96] Skipping create...Using existing machine configuration
	I0910 19:20:16.095793   79419 fix.go:54] fixHost starting: 
	I0910 19:20:16.096144   79419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:20:16.096176   79419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:20:16.110152   79419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38089
	I0910 19:20:16.110556   79419 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:20:16.111068   79419 main.go:141] libmachine: Using API Version  1
	I0910 19:20:16.111093   79419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:20:16.111563   79419 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:20:16.111732   79419 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:20:16.111887   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetState
	I0910 19:20:16.113626   79419 fix.go:112] recreateIfNeeded on newest-cni-374465: state=Stopped err=<nil>
	I0910 19:20:16.113648   79419 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	W0910 19:20:16.113806   79419 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 19:20:16.115370   79419 out.go:177] * Restarting existing kvm2 VM for "newest-cni-374465" ...
	I0910 19:20:16.116374   79419 main.go:141] libmachine: (newest-cni-374465) Calling .Start
	I0910 19:20:16.116541   79419 main.go:141] libmachine: (newest-cni-374465) Ensuring networks are active...
	I0910 19:20:16.117202   79419 main.go:141] libmachine: (newest-cni-374465) Ensuring network default is active
	I0910 19:20:16.117507   79419 main.go:141] libmachine: (newest-cni-374465) Ensuring network mk-newest-cni-374465 is active
	I0910 19:20:16.117898   79419 main.go:141] libmachine: (newest-cni-374465) Getting domain xml...
	I0910 19:20:16.118662   79419 main.go:141] libmachine: (newest-cni-374465) Creating domain...
	I0910 19:20:17.459480   79419 main.go:141] libmachine: (newest-cni-374465) Waiting to get IP...
	I0910 19:20:17.460373   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:17.460758   79419 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:20:17.460841   79419 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:20:17.460758   79473 retry.go:31] will retry after 291.372034ms: waiting for machine to come up
	I0910 19:20:17.754444   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:17.754922   79419 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:20:17.754953   79419 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:20:17.754828   79473 retry.go:31] will retry after 261.718497ms: waiting for machine to come up
	I0910 19:20:18.018220   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:18.018708   79419 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:20:18.018739   79419 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:20:18.018658   79473 retry.go:31] will retry after 294.973114ms: waiting for machine to come up
	I0910 19:20:18.315194   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:18.394046   79419 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:20:18.394082   79419 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:20:18.393978   79473 retry.go:31] will retry after 422.672213ms: waiting for machine to come up
	I0910 19:20:18.818528   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:18.818962   79419 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:20:18.818987   79419 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:20:18.818918   79473 retry.go:31] will retry after 748.951406ms: waiting for machine to come up
	I0910 19:20:19.569834   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:19.570296   79419 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:20:19.570335   79419 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:20:19.570252   79473 retry.go:31] will retry after 572.492071ms: waiting for machine to come up
	I0910 19:20:20.144054   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:20.144504   79419 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:20:20.144531   79419 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:20:20.144456   79473 retry.go:31] will retry after 1.073703244s: waiting for machine to come up
	I0910 19:20:21.220142   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:21.220567   79419 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:20:21.220589   79419 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:20:21.220518   79473 retry.go:31] will retry after 1.044962647s: waiting for machine to come up
	I0910 19:20:22.267456   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:22.267936   79419 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:20:22.267956   79419 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:20:22.267901   79473 retry.go:31] will retry after 1.22573115s: waiting for machine to come up
	I0910 19:20:23.495148   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:23.495520   79419 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:20:23.495557   79419 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:20:23.495471   79473 retry.go:31] will retry after 1.621357282s: waiting for machine to come up
	I0910 19:20:25.118419   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:25.118944   79419 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:20:25.118970   79419 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:20:25.118910   79473 retry.go:31] will retry after 2.795668721s: waiting for machine to come up
	I0910 19:20:27.916874   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:27.917318   79419 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:20:27.917347   79419 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:20:27.917254   79473 retry.go:31] will retry after 2.588003517s: waiting for machine to come up
	I0910 19:20:30.507894   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:30.508354   79419 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:20:30.508377   79419 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:20:30.508305   79473 retry.go:31] will retry after 4.164019789s: waiting for machine to come up
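
The run of "will retry after ...: waiting for machine to come up" lines above is a bounded retry loop with a growing, lightly jittered delay while the restarted VM waits for a DHCP lease; it ends once libvirt reports an address. A small retry-with-backoff sketch in Go; the initial delay, growth factor and jitter are assumptions for illustration, not minikube's exact retry helper:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls fn until it succeeds or the timeout elapses,
	// sleeping a little longer (plus jitter) after each failure.
	func retryWithBackoff(timeout time.Duration, fn func() error) error {
		delay := 250 * time.Millisecond
		start := time.Now()
		for attempt := 1; ; attempt++ {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > timeout {
				return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
			}
			// Grow the delay and add jitter, roughly like the 291ms, 261ms, 294ms, ... sequence above.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
	}

	func main() {
		tries := 0
		err := retryWithBackoff(30*time.Second, func() error {
			tries++
			if tries < 4 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		})
		fmt.Println("result:", err)
	}
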
	I0910 19:20:34.674141   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:34.674539   79419 main.go:141] libmachine: (newest-cni-374465) Found IP for machine: 192.168.61.46
	I0910 19:20:34.674571   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has current primary IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:34.674581   79419 main.go:141] libmachine: (newest-cni-374465) Reserving static IP address...
	I0910 19:20:34.674934   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "newest-cni-374465", mac: "52:54:00:03:a9:68", ip: "192.168.61.46"} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:34.674954   79419 main.go:141] libmachine: (newest-cni-374465) DBG | skip adding static IP to network mk-newest-cni-374465 - found existing host DHCP lease matching {name: "newest-cni-374465", mac: "52:54:00:03:a9:68", ip: "192.168.61.46"}
	I0910 19:20:34.674976   79419 main.go:141] libmachine: (newest-cni-374465) Reserved static IP address: 192.168.61.46
	I0910 19:20:34.674992   79419 main.go:141] libmachine: (newest-cni-374465) Waiting for SSH to be available...
	I0910 19:20:34.675007   79419 main.go:141] libmachine: (newest-cni-374465) DBG | Getting to WaitForSSH function...
	I0910 19:20:34.676822   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:34.677127   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:34.677153   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:34.677287   79419 main.go:141] libmachine: (newest-cni-374465) DBG | Using SSH client type: external
	I0910 19:20:34.677313   79419 main.go:141] libmachine: (newest-cni-374465) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/id_rsa (-rw-------)
	I0910 19:20:34.677359   79419 main.go:141] libmachine: (newest-cni-374465) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.46 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 19:20:34.677374   79419 main.go:141] libmachine: (newest-cni-374465) DBG | About to run SSH command:
	I0910 19:20:34.677389   79419 main.go:141] libmachine: (newest-cni-374465) DBG | exit 0
	I0910 19:20:34.796756   79419 main.go:141] libmachine: (newest-cni-374465) DBG | SSH cmd err, output: <nil>: 
	I0910 19:20:34.797102   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetConfigRaw
	I0910 19:20:34.797763   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetIP
	I0910 19:20:34.799958   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:34.800193   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:34.800236   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:34.800425   79419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/config.json ...
	I0910 19:20:34.800684   79419 machine.go:93] provisionDockerMachine start ...
	I0910 19:20:34.800702   79419 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:20:34.800916   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:20:34.802745   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:34.802964   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:34.802998   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:34.803115   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:20:34.803293   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:20:34.803476   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:20:34.803642   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:20:34.803803   79419 main.go:141] libmachine: Using SSH client type: native
	I0910 19:20:34.803980   79419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.46 22 <nil> <nil>}
	I0910 19:20:34.803990   79419 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 19:20:34.901503   79419 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
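
The provisioning phase runs one-off commands such as `hostname` over SSH with the machine's id_rsa key and the docker user, matching the ssh flags logged earlier. A minimal equivalent using golang.org/x/crypto/ssh; the key path is representative of the log's layout and error handling is kept deliberately terse:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/newest-cni-374465/id_rsa"))
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
		}
		client, err := ssh.Dial("tcp", "192.168.61.46:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		out, err := session.Output("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("SSH cmd output: %s", out)
	}
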
	
	I0910 19:20:34.901535   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetMachineName
	I0910 19:20:34.901771   79419 buildroot.go:166] provisioning hostname "newest-cni-374465"
	I0910 19:20:34.901802   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetMachineName
	I0910 19:20:34.901990   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:20:34.904422   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:34.904751   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:34.904778   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:34.904870   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:20:34.905034   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:20:34.905190   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:20:34.905336   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:20:34.905523   79419 main.go:141] libmachine: Using SSH client type: native
	I0910 19:20:34.905730   79419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.46 22 <nil> <nil>}
	I0910 19:20:34.905743   79419 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-374465 && echo "newest-cni-374465" | sudo tee /etc/hostname
	I0910 19:20:35.020694   79419 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-374465
	
	I0910 19:20:35.020724   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:20:35.023407   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.023782   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:35.023808   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.024037   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:20:35.024230   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:20:35.024384   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:20:35.024497   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:20:35.024653   79419 main.go:141] libmachine: Using SSH client type: native
	I0910 19:20:35.024806   79419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.46 22 <nil> <nil>}
	I0910 19:20:35.024821   79419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-374465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-374465/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-374465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 19:20:35.135787   79419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 19:20:35.135812   79419 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 19:20:35.135861   79419 buildroot.go:174] setting up certificates
	I0910 19:20:35.135869   79419 provision.go:84] configureAuth start
	I0910 19:20:35.135881   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetMachineName
	I0910 19:20:35.136206   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetIP
	I0910 19:20:35.138623   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.138994   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:35.139016   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.139170   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:20:35.141463   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.141844   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:35.141880   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.142033   79419 provision.go:143] copyHostCerts
	I0910 19:20:35.142165   79419 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 19:20:35.142183   79419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 19:20:35.142267   79419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 19:20:35.142374   79419 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 19:20:35.142384   79419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 19:20:35.142410   79419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 19:20:35.142478   79419 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 19:20:35.142487   79419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 19:20:35.142529   79419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 19:20:35.142608   79419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.newest-cni-374465 san=[127.0.0.1 192.168.61.46 localhost minikube newest-cni-374465]
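
provision.go:117 issues a server certificate whose subject alternative names cover 127.0.0.1, the machine IP 192.168.61.46 and the host names listed above, signed with the profile's CA key. A compact crypto/x509 sketch of producing such a SAN-bearing certificate; it self-signs for brevity, whereas minikube signs with ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-374465"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s in the config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.46")},
			DNSNames:    []string{"localhost", "minikube", "newest-cni-374465"},
		}
		// Self-signed for the sketch: the template doubles as the issuer.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			log.Fatal(err)
		}
	}
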
	I0910 19:20:35.244934   79419 provision.go:177] copyRemoteCerts
	I0910 19:20:35.244987   79419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 19:20:35.245011   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:20:35.247454   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.247737   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:35.247772   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.247926   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:20:35.248125   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:20:35.248295   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:20:35.248419   79419 sshutil.go:53] new ssh client: &{IP:192.168.61.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/id_rsa Username:docker}
	I0910 19:20:35.327353   79419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 19:20:35.352462   79419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 19:20:35.377361   79419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0910 19:20:35.401231   79419 provision.go:87] duration metric: took 265.347895ms to configureAuth
	I0910 19:20:35.401257   79419 buildroot.go:189] setting minikube options for container-runtime
	I0910 19:20:35.401489   79419 config.go:182] Loaded profile config "newest-cni-374465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:20:35.401592   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:20:35.404191   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.404575   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:35.404603   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.404752   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:20:35.404925   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:20:35.405099   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:20:35.405241   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:20:35.405398   79419 main.go:141] libmachine: Using SSH client type: native
	I0910 19:20:35.405600   79419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.46 22 <nil> <nil>}
	I0910 19:20:35.405621   79419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 19:20:35.619192   79419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 19:20:35.619216   79419 machine.go:96] duration metric: took 818.518327ms to provisionDockerMachine
	I0910 19:20:35.619227   79419 start.go:293] postStartSetup for "newest-cni-374465" (driver="kvm2")
	I0910 19:20:35.619259   79419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 19:20:35.619279   79419 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:20:35.619600   79419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 19:20:35.619625   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:20:35.622008   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.622405   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:35.622434   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.622608   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:20:35.622760   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:20:35.622909   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:20:35.623037   79419 sshutil.go:53] new ssh client: &{IP:192.168.61.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/id_rsa Username:docker}
	I0910 19:20:35.704702   79419 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 19:20:35.709066   79419 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 19:20:35.709101   79419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 19:20:35.709172   79419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 19:20:35.709290   79419 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 19:20:35.709396   79419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 19:20:35.718909   79419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 19:20:35.741341   79419 start.go:296] duration metric: took 122.104197ms for postStartSetup
	I0910 19:20:35.741371   79419 fix.go:56] duration metric: took 19.645584957s for fixHost
	I0910 19:20:35.741389   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:20:35.744017   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.744352   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:35.744379   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.744542   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:20:35.744711   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:20:35.744821   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:20:35.744924   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:20:35.745064   79419 main.go:141] libmachine: Using SSH client type: native
	I0910 19:20:35.745264   79419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.46 22 <nil> <nil>}
	I0910 19:20:35.745287   79419 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 19:20:35.845744   79419 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725996035.820650061
	
	I0910 19:20:35.845768   79419 fix.go:216] guest clock: 1725996035.820650061
	I0910 19:20:35.845777   79419 fix.go:229] Guest: 2024-09-10 19:20:35.820650061 +0000 UTC Remote: 2024-09-10 19:20:35.741374971 +0000 UTC m=+19.787620952 (delta=79.27509ms)
	I0910 19:20:35.845803   79419 fix.go:200] guest clock delta is within tolerance: 79.27509ms
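
fix.go parses the guest's `date +%s.%N` output, compares it against the host clock reading taken just before, and only adjusts the guest clock when the delta exceeds a tolerance; here the 79ms delta is accepted. A small sketch of that comparison (the 2s tolerance is an assumed value for illustration, and %N is assumed to yield nine digits):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1725996035.820650061\n")
		if err != nil {
			panic(err)
		}
		host := time.Unix(1725996035, 741374971) // host-side reading from the log
		delta := guest.Sub(host)

		const tolerance = 2 * time.Second // assumed threshold, not minikube's exact value
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		} else {
			fmt.Printf("guest clock needs adjustment, delta: %v\n", delta)
		}
	}
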
	I0910 19:20:35.845811   79419 start.go:83] releasing machines lock for "newest-cni-374465", held for 19.750037615s
	I0910 19:20:35.845855   79419 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:20:35.846106   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetIP
	I0910 19:20:35.848806   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.849219   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:35.849249   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.849382   79419 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:20:35.849932   79419 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:20:35.850092   79419 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:20:35.850188   79419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 19:20:35.850238   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:20:35.850264   79419 ssh_runner.go:195] Run: cat /version.json
	I0910 19:20:35.850288   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:20:35.853110   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.853340   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.853514   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:35.853561   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.853707   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:20:35.853870   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:20:35.853942   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:35.853961   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:35.854047   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:20:35.854118   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:20:35.854184   79419 sshutil.go:53] new ssh client: &{IP:192.168.61.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/id_rsa Username:docker}
	I0910 19:20:35.854269   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:20:35.854382   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:20:35.854521   79419 sshutil.go:53] new ssh client: &{IP:192.168.61.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/id_rsa Username:docker}
	I0910 19:20:35.948365   79419 ssh_runner.go:195] Run: systemctl --version
	I0910 19:20:35.954541   79419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 19:20:36.099278   79419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 19:20:36.105659   79419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 19:20:36.105731   79419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 19:20:36.124265   79419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 19:20:36.124297   79419 start.go:495] detecting cgroup driver to use...
	I0910 19:20:36.124386   79419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 19:20:36.143354   79419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:20:36.157824   79419 docker.go:217] disabling cri-docker service (if available) ...
	I0910 19:20:36.157888   79419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 19:20:36.173670   79419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 19:20:36.189013   79419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 19:20:36.324601   79419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 19:20:36.470810   79419 docker.go:233] disabling docker service ...
	I0910 19:20:36.470874   79419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 19:20:36.485158   79419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 19:20:36.498499   79419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 19:20:36.632113   79419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 19:20:36.761835   79419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 19:20:36.775554   79419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:20:36.794795   79419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 19:20:36.794858   79419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:20:36.805532   79419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 19:20:36.805588   79419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:20:36.815678   79419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:20:36.826488   79419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:20:36.836698   79419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 19:20:36.847267   79419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:20:36.858227   79419 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:20:36.875705   79419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:20:36.886009   79419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 19:20:36.895373   79419 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 19:20:36.895440   79419 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 19:20:36.908819   79419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
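
When the bridge-nf-call-iptables sysctl is missing, as in the error above, the fallback is to load the br_netfilter module and then enable IPv4 forwarding. The same two steps as a short Go sketch run directly on the guest as root (illustrative only; the log shows them issued as shell commands over SSH):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// If the bridge netfilter sysctl file is absent, the kernel module is not loaded yet.
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); os.IsNotExist(err) {
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
			}
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
			log.Fatal(err)
		}
	}
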
	I0910 19:20:36.918332   79419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:20:37.039271   79419 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 19:20:37.135189   79419 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 19:20:37.135263   79419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 19:20:37.141918   79419 start.go:563] Will wait 60s for crictl version
	I0910 19:20:37.141978   79419 ssh_runner.go:195] Run: which crictl
	I0910 19:20:37.145602   79419 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 19:20:37.192538   79419 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 19:20:37.192636   79419 ssh_runner.go:195] Run: crio --version
	I0910 19:20:37.219518   79419 ssh_runner.go:195] Run: crio --version
	I0910 19:20:37.253302   79419 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 19:20:37.254676   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetIP
	I0910 19:20:37.257287   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:37.257571   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:37.257615   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:37.257762   79419 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0910 19:20:37.261780   79419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:20:37.275539   79419 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0910 19:20:37.276548   79419 kubeadm.go:883] updating cluster {Name:newest-cni-374465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-374465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.46 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 19:20:37.276669   79419 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 19:20:37.276732   79419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 19:20:37.311892   79419 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 19:20:37.311961   79419 ssh_runner.go:195] Run: which lz4
	I0910 19:20:37.316128   79419 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 19:20:37.320136   79419 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 19:20:37.320166   79419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 19:20:38.656403   79419 crio.go:462] duration metric: took 1.340300953s to copy over tarball
	I0910 19:20:38.656463   79419 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 19:20:40.717321   79419 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.06082825s)
	I0910 19:20:40.717350   79419 crio.go:469] duration metric: took 2.060924863s to extract the tarball
	I0910 19:20:40.717360   79419 ssh_runner.go:146] rm: /preloaded.tar.lz4
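	The lines above copy the cached preload tarball onto the node and unpack it so CRI-O already has the v1.31.0 images before kubeadm runs. A rough sketch of that extract step, as a hypothetical Go helper shelling out to the same tar invocation (paths taken from the log; not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload unpacks an lz4-compressed image tarball under /var,
	// preserving security.capability xattrs (e.g. on the kube binaries).
	func extractPreload(tarball string) error {
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
		}
		return nil
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4"); err != nil {
			fmt.Println(err)
		}
	}
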
	I0910 19:20:40.756196   79419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 19:20:40.807859   79419 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 19:20:40.807880   79419 cache_images.go:84] Images are preloaded, skipping loading
	I0910 19:20:40.807890   79419 kubeadm.go:934] updating node { 192.168.61.46 8443 v1.31.0 crio true true} ...
	I0910 19:20:40.808019   79419 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-374465 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.46
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:newest-cni-374465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 19:20:40.808103   79419 ssh_runner.go:195] Run: crio config
	I0910 19:20:40.854888   79419 cni.go:84] Creating CNI manager for ""
	I0910 19:20:40.854907   79419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:20:40.854915   79419 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0910 19:20:40.854937   79419 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.46 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-374465 NodeName:newest-cni-374465 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.46"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[
] NodeIP:192.168.61.46 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 19:20:40.855066   79419 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.46
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-374465"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.46
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.46"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 19:20:40.855129   79419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 19:20:40.865315   79419 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 19:20:40.865405   79419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 19:20:40.874775   79419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0910 19:20:40.891479   79419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 19:20:40.908095   79419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I0910 19:20:40.925637   79419 ssh_runner.go:195] Run: grep 192.168.61.46	control-plane.minikube.internal$ /etc/hosts
	I0910 19:20:40.929489   79419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.46	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
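	The bash one-liner above is the idempotent way the log pins control-plane.minikube.internal in /etc/hosts: drop any stale mapping for that hostname, append the current IP, and copy the result back with sudo. A minimal Go sketch of the same pattern (hypothetical helper, assumes it runs with permission to write /etc/hosts):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry removes any line ending in "<tab>host" and appends
	// "ip<tab>host", mirroring the grep -v / echo pipeline from the log.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // stale mapping for this hostname
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.61.46", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
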
	I0910 19:20:40.941446   79419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:20:41.081552   79419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:20:41.100217   79419 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465 for IP: 192.168.61.46
	I0910 19:20:41.100251   79419 certs.go:194] generating shared ca certs ...
	I0910 19:20:41.100274   79419 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:20:41.100451   79419 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 19:20:41.100513   79419 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 19:20:41.100527   79419 certs.go:256] generating profile certs ...
	I0910 19:20:41.100633   79419 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/client.key
	I0910 19:20:41.100692   79419 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/apiserver.key.29994378
	I0910 19:20:41.100744   79419 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/proxy-client.key
	I0910 19:20:41.100904   79419 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 19:20:41.100947   79419 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 19:20:41.100961   79419 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 19:20:41.101000   79419 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 19:20:41.101034   79419 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 19:20:41.101065   79419 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 19:20:41.101146   79419 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 19:20:41.102141   79419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 19:20:41.135081   79419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 19:20:41.164709   79419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 19:20:41.196296   79419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 19:20:41.229331   79419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0910 19:20:41.260928   79419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 19:20:41.289515   79419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 19:20:41.312595   79419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 19:20:41.336223   79419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 19:20:41.358896   79419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 19:20:41.381417   79419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 19:20:41.403588   79419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 19:20:41.420101   79419 ssh_runner.go:195] Run: openssl version
	I0910 19:20:41.425783   79419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 19:20:41.436096   79419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 19:20:41.440402   79419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 19:20:41.440447   79419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 19:20:41.446256   79419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 19:20:41.456714   79419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 19:20:41.467473   79419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 19:20:41.471839   79419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 19:20:41.471885   79419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 19:20:41.477499   79419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 19:20:41.487795   79419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 19:20:41.498031   79419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:20:41.502361   79419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:20:41.502394   79419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:20:41.507836   79419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 19:20:41.518393   79419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 19:20:41.522822   79419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 19:20:41.528605   79419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 19:20:41.534327   79419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 19:20:41.540132   79419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 19:20:41.545569   79419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 19:20:41.551015   79419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
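	Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours, which is what decides whether the cached control-plane certs can be reused. A small illustrative sketch of the equivalent check in Go (path taken from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// within duration d, matching `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
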
	I0910 19:20:41.556545   79419 kubeadm.go:392] StartCluster: {Name:newest-cni-374465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:newest-cni-374465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.46 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s
ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:20:41.556652   79419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 19:20:41.556686   79419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 19:20:41.593835   79419 cri.go:89] found id: ""
	I0910 19:20:41.593909   79419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 19:20:41.603954   79419 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 19:20:41.603975   79419 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 19:20:41.604030   79419 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 19:20:41.613507   79419 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 19:20:41.614031   79419 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-374465" does not appear in /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 19:20:41.614278   79419 kubeconfig.go:62] /home/jenkins/minikube-integration/19598-5973/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-374465" cluster setting kubeconfig missing "newest-cni-374465" context setting]
	I0910 19:20:41.614736   79419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:20:41.615971   79419 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 19:20:41.625478   79419 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.46
	I0910 19:20:41.625510   79419 kubeadm.go:1160] stopping kube-system containers ...
	I0910 19:20:41.625524   79419 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 19:20:41.625577   79419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 19:20:41.659647   79419 cri.go:89] found id: ""
	I0910 19:20:41.659713   79419 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 19:20:41.675336   79419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:20:41.684708   79419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:20:41.684725   79419 kubeadm.go:157] found existing configuration files:
	
	I0910 19:20:41.684761   79419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:20:41.693786   79419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:20:41.693833   79419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:20:41.703145   79419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:20:41.711826   79419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:20:41.711884   79419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:20:41.721173   79419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:20:41.730539   79419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:20:41.730600   79419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:20:41.740533   79419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:20:41.749384   79419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:20:41.749439   79419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:20:41.758795   79419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:20:41.768202   79419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:20:41.875750   79419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:20:42.663682   79419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:20:42.863525   79419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:20:42.941807   79419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
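	The five commands above are the heart of restartPrimaryControlPlane: rather than a full `kubeadm init`, the individual init phases (certs, kubeconfigs, kubelet-start, static control-plane pods, local etcd) are re-run against the regenerated kubeadm.yaml. A hedged sketch of that loop as a hypothetical wrapper (binary path and phase list taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runKubeadmPhases re-runs each `kubeadm init phase ...` with the
	// given config, using the version-pinned kubeadm binary.
	func runKubeadmPhases(version, config string) error {
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		kubeadm := "/var/lib/minikube/binaries/" + version + "/kubeadm"
		for _, phase := range phases {
			args := append([]string{kubeadm, "init", "phase"}, phase...)
			args = append(args, "--config", config)
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("phase %v: %v: %s", phase, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := runKubeadmPhases("v1.31.0", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
			fmt.Println(err)
		}
	}
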
	I0910 19:20:43.059254   79419 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:20:43.059352   79419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:20:43.559568   79419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:20:44.059774   79419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:20:44.559844   79419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:20:45.060054   79419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:20:45.080987   79419 api_server.go:72] duration metric: took 2.021734094s to wait for apiserver process to appear ...
	I0910 19:20:45.081014   79419 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:20:45.081035   79419 api_server.go:253] Checking apiserver healthz at https://192.168.61.46:8443/healthz ...
	I0910 19:20:45.081540   79419 api_server.go:269] stopped: https://192.168.61.46:8443/healthz: Get "https://192.168.61.46:8443/healthz": dial tcp 192.168.61.46:8443: connect: connection refused
	I0910 19:20:45.581100   79419 api_server.go:253] Checking apiserver healthz at https://192.168.61.46:8443/healthz ...
	I0910 19:20:48.210604   79419 api_server.go:279] https://192.168.61.46:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 19:20:48.210632   79419 api_server.go:103] status: https://192.168.61.46:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 19:20:48.210646   79419 api_server.go:253] Checking apiserver healthz at https://192.168.61.46:8443/healthz ...
	I0910 19:20:48.245462   79419 api_server.go:279] https://192.168.61.46:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 19:20:48.245494   79419 api_server.go:103] status: https://192.168.61.46:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 19:20:48.581935   79419 api_server.go:253] Checking apiserver healthz at https://192.168.61.46:8443/healthz ...
	I0910 19:20:48.586236   79419 api_server.go:279] https://192.168.61.46:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:20:48.586264   79419 api_server.go:103] status: https://192.168.61.46:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:20:49.081832   79419 api_server.go:253] Checking apiserver healthz at https://192.168.61.46:8443/healthz ...
	I0910 19:20:49.101242   79419 api_server.go:279] https://192.168.61.46:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:20:49.101277   79419 api_server.go:103] status: https://192.168.61.46:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:20:49.581635   79419 api_server.go:253] Checking apiserver healthz at https://192.168.61.46:8443/healthz ...
	I0910 19:20:49.590004   79419 api_server.go:279] https://192.168.61.46:8443/healthz returned 200:
	ok
	I0910 19:20:49.596721   79419 api_server.go:141] control plane version: v1.31.0
	I0910 19:20:49.596745   79419 api_server.go:131] duration metric: took 4.515725691s to wait for apiserver health ...
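	The healthz loop above is how the restart decides the apiserver is back: the 403 from system:anonymous and the 500s while the bootstrap post-start hooks finish are both treated as "keep polling", and only a plain 200 "ok" ends the wait. A rough Go sketch of that loop (endpoint and retry cadence taken from the log; TLS verification is skipped only to keep the example short, real code would trust the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports healthy
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.46:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
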
	I0910 19:20:49.596753   79419 cni.go:84] Creating CNI manager for ""
	I0910 19:20:49.596759   79419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:20:49.598182   79419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 19:20:49.599246   79419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 19:20:49.619395   79419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 19:20:49.642762   79419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:20:49.654732   79419 system_pods.go:59] 8 kube-system pods found
	I0910 19:20:49.654772   79419 system_pods.go:61] "coredns-6f6b679f8f-trkv4" [0cb12f1e-e53a-47c9-a659-613cd6a682dd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 19:20:49.654782   79419 system_pods.go:61] "etcd-newest-cni-374465" [79903ebb-3963-4f3e-9fba-e50173dff714] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 19:20:49.654797   79419 system_pods.go:61] "kube-apiserver-newest-cni-374465" [f4f457cf-1d78-4bf3-9008-4266d1643964] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 19:20:49.654809   79419 system_pods.go:61] "kube-controller-manager-newest-cni-374465" [e7faff3a-7cc2-4fde-8440-57dde6c1bc97] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 19:20:49.654822   79419 system_pods.go:61] "kube-proxy-r74bk" [25af96ee-c02b-4dbe-9b4a-387be550fbf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0910 19:20:49.654832   79419 system_pods.go:61] "kube-scheduler-newest-cni-374465" [08379c27-2a00-423b-b28c-92eea5024d80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 19:20:49.654849   79419 system_pods.go:61] "metrics-server-6867b74b74-vxps8" [ff3a97da-72a1-49fe-8f09-a61b8cd35fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:20:49.654860   79419 system_pods.go:61] "storage-provisioner" [7edbaafd-4ff2-41f6-b889-5f16176b7624] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 19:20:49.654874   79419 system_pods.go:74] duration metric: took 12.08954ms to wait for pod list to return data ...
	I0910 19:20:49.654885   79419 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:20:49.659691   79419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:20:49.659721   79419 node_conditions.go:123] node cpu capacity is 2
	I0910 19:20:49.659736   79419 node_conditions.go:105] duration metric: took 4.845228ms to run NodePressure ...
	I0910 19:20:49.659755   79419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:20:49.934900   79419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 19:20:49.946605   79419 ops.go:34] apiserver oom_adj: -16
	I0910 19:20:49.946623   79419 kubeadm.go:597] duration metric: took 8.342641791s to restartPrimaryControlPlane
	I0910 19:20:49.946634   79419 kubeadm.go:394] duration metric: took 8.390095743s to StartCluster
	I0910 19:20:49.946653   79419 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:20:49.946728   79419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 19:20:49.947482   79419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:20:49.947716   79419 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.46 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 19:20:49.947779   79419 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 19:20:49.947860   79419 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-374465"
	I0910 19:20:49.947908   79419 addons.go:69] Setting default-storageclass=true in profile "newest-cni-374465"
	I0910 19:20:49.947934   79419 addons.go:69] Setting metrics-server=true in profile "newest-cni-374465"
	I0910 19:20:49.947929   79419 addons.go:69] Setting dashboard=true in profile "newest-cni-374465"
	I0910 19:20:49.947949   79419 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-374465"
	I0910 19:20:49.947960   79419 config.go:182] Loaded profile config "newest-cni-374465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:20:49.947971   79419 addons.go:234] Setting addon metrics-server=true in "newest-cni-374465"
	I0910 19:20:49.947971   79419 addons.go:234] Setting addon dashboard=true in "newest-cni-374465"
	W0910 19:20:49.947982   79419 addons.go:243] addon dashboard should already be in state true
	W0910 19:20:49.947985   79419 addons.go:243] addon metrics-server should already be in state true
	I0910 19:20:49.948016   79419 host.go:66] Checking if "newest-cni-374465" exists ...
	I0910 19:20:49.948017   79419 host.go:66] Checking if "newest-cni-374465" exists ...
	I0910 19:20:49.948279   79419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:20:49.948326   79419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:20:49.948397   79419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:20:49.948440   79419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:20:49.948473   79419 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-374465"
	W0910 19:20:49.948490   79419 addons.go:243] addon storage-provisioner should already be in state true
	I0910 19:20:49.948526   79419 host.go:66] Checking if "newest-cni-374465" exists ...
	I0910 19:20:49.948542   79419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:20:49.948579   79419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:20:49.948816   79419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:20:49.948853   79419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:20:49.949311   79419 out.go:177] * Verifying Kubernetes components...
	I0910 19:20:49.950434   79419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:20:49.964385   79419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39141
	I0910 19:20:49.964385   79419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33453
	I0910 19:20:49.964721   79419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39251
	I0910 19:20:49.964919   79419 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:20:49.965223   79419 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:20:49.965309   79419 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:20:49.965454   79419 main.go:141] libmachine: Using API Version  1
	I0910 19:20:49.965472   79419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:20:49.965739   79419 main.go:141] libmachine: Using API Version  1
	I0910 19:20:49.965762   79419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:20:49.965814   79419 main.go:141] libmachine: Using API Version  1
	I0910 19:20:49.965845   79419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:20:49.966197   79419 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:20:49.966209   79419 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:20:49.966262   79419 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:20:49.966592   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetState
	I0910 19:20:49.966670   79419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:20:49.966702   79419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:20:49.967072   79419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38061
	I0910 19:20:49.967180   79419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:20:49.967246   79419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:20:49.967479   79419 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:20:49.967921   79419 main.go:141] libmachine: Using API Version  1
	I0910 19:20:49.967938   79419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:20:49.968276   79419 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:20:49.968813   79419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:20:49.968846   79419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:20:49.970423   79419 addons.go:234] Setting addon default-storageclass=true in "newest-cni-374465"
	W0910 19:20:49.970480   79419 addons.go:243] addon default-storageclass should already be in state true
	I0910 19:20:49.970522   79419 host.go:66] Checking if "newest-cni-374465" exists ...
	I0910 19:20:49.970924   79419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:20:49.970997   79419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:20:49.985091   79419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41651
	I0910 19:20:49.985408   79419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39481
	I0910 19:20:49.985568   79419 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:20:49.985840   79419 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:20:49.986090   79419 main.go:141] libmachine: Using API Version  1
	I0910 19:20:49.986106   79419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:20:49.986202   79419 main.go:141] libmachine: Using API Version  1
	I0910 19:20:49.986210   79419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:20:49.986439   79419 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:20:49.986524   79419 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:20:49.986706   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetState
	I0910 19:20:49.986773   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetState
	I0910 19:20:49.986824   79419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33003
	I0910 19:20:49.987349   79419 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:20:49.987761   79419 main.go:141] libmachine: Using API Version  1
	I0910 19:20:49.987777   79419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:20:49.988078   79419 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:20:49.988639   79419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:20:49.988668   79419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:20:49.988852   79419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0910 19:20:49.989177   79419 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:20:49.989238   79419 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:20:49.989614   79419 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:20:49.990997   79419 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 19:20:49.991046   79419 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:20:49.992121   79419 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 19:20:49.992151   79419 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 19:20:49.992171   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:20:49.992293   79419 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:20:49.992313   79419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 19:20:49.992329   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:20:49.995492   79419 main.go:141] libmachine: Using API Version  1
	I0910 19:20:49.995515   79419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:20:49.995814   79419 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:20:49.996071   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetState
	I0910 19:20:49.996124   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:49.996250   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:49.996681   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:49.996704   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:49.996908   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:20:49.997016   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:20:49.997040   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:49.997057   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:49.997123   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:20:49.997229   79419 sshutil.go:53] new ssh client: &{IP:192.168.61.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/id_rsa Username:docker}
	I0910 19:20:49.997517   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:20:49.997708   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:20:49.997902   79419 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:20:49.997963   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:20:49.998203   79419 sshutil.go:53] new ssh client: &{IP:192.168.61.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/id_rsa Username:docker}
	I0910 19:20:49.999457   79419 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0910 19:20:50.000530   79419 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0910 19:20:50.001691   79419 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0910 19:20:50.001715   79419 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0910 19:20:50.001731   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:20:50.004565   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:50.005041   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:50.005067   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:50.005198   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:20:50.005361   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:20:50.005492   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:20:50.005628   79419 sshutil.go:53] new ssh client: &{IP:192.168.61.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/id_rsa Username:docker}
	I0910 19:20:50.009388   79419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36577
	I0910 19:20:50.009770   79419 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:20:50.010286   79419 main.go:141] libmachine: Using API Version  1
	I0910 19:20:50.010314   79419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:20:50.010689   79419 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:20:50.010888   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetState
	I0910 19:20:50.012446   79419 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:20:50.012644   79419 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 19:20:50.012658   79419 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 19:20:50.012672   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:20:50.015628   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:50.015948   79419 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:20:27 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:20:50.015973   79419 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:20:50.016152   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:20:50.016309   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:20:50.016444   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:20:50.016538   79419 sshutil.go:53] new ssh client: &{IP:192.168.61.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/id_rsa Username:docker}
	I0910 19:20:50.187873   79419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:20:50.216682   79419 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:20:50.216765   79419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:20:50.231717   79419 api_server.go:72] duration metric: took 283.969884ms to wait for apiserver process to appear ...
	I0910 19:20:50.231746   79419 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:20:50.231768   79419 api_server.go:253] Checking apiserver healthz at https://192.168.61.46:8443/healthz ...
	I0910 19:20:50.236163   79419 api_server.go:279] https://192.168.61.46:8443/healthz returned 200:
	ok
	I0910 19:20:50.237255   79419 api_server.go:141] control plane version: v1.31.0
	I0910 19:20:50.237275   79419 api_server.go:131] duration metric: took 5.5233ms to wait for apiserver health ...
	I0910 19:20:50.237283   79419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:20:50.243132   79419 system_pods.go:59] 8 kube-system pods found
	I0910 19:20:50.243160   79419 system_pods.go:61] "coredns-6f6b679f8f-trkv4" [0cb12f1e-e53a-47c9-a659-613cd6a682dd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 19:20:50.243177   79419 system_pods.go:61] "etcd-newest-cni-374465" [79903ebb-3963-4f3e-9fba-e50173dff714] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 19:20:50.243190   79419 system_pods.go:61] "kube-apiserver-newest-cni-374465" [f4f457cf-1d78-4bf3-9008-4266d1643964] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 19:20:50.243209   79419 system_pods.go:61] "kube-controller-manager-newest-cni-374465" [e7faff3a-7cc2-4fde-8440-57dde6c1bc97] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 19:20:50.243221   79419 system_pods.go:61] "kube-proxy-r74bk" [25af96ee-c02b-4dbe-9b4a-387be550fbf8] Running
	I0910 19:20:50.243231   79419 system_pods.go:61] "kube-scheduler-newest-cni-374465" [08379c27-2a00-423b-b28c-92eea5024d80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 19:20:50.243254   79419 system_pods.go:61] "metrics-server-6867b74b74-vxps8" [ff3a97da-72a1-49fe-8f09-a61b8cd35fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:20:50.243266   79419 system_pods.go:61] "storage-provisioner" [7edbaafd-4ff2-41f6-b889-5f16176b7624] Running
	I0910 19:20:50.243275   79419 system_pods.go:74] duration metric: took 5.9863ms to wait for pod list to return data ...
	I0910 19:20:50.243287   79419 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:20:50.245679   79419 default_sa.go:45] found service account: "default"
	I0910 19:20:50.245698   79419 default_sa.go:55] duration metric: took 2.40071ms for default service account to be created ...
	I0910 19:20:50.245710   79419 kubeadm.go:582] duration metric: took 297.96555ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0910 19:20:50.245729   79419 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:20:50.247959   79419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:20:50.247981   79419 node_conditions.go:123] node cpu capacity is 2
	I0910 19:20:50.247994   79419 node_conditions.go:105] duration metric: took 2.258795ms to run NodePressure ...
	I0910 19:20:50.248008   79419 start.go:241] waiting for startup goroutines ...
	I0910 19:20:50.329951   79419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 19:20:50.333140   79419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 19:20:50.333161   79419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 19:20:50.351135   79419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:20:50.371156   79419 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0910 19:20:50.371177   79419 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0910 19:20:50.407417   79419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 19:20:50.407442   79419 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 19:20:50.444963   79419 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0910 19:20:50.444988   79419 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0910 19:20:50.486270   79419 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0910 19:20:50.486298   79419 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0910 19:20:50.501583   79419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:20:50.501614   79419 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 19:20:50.544164   79419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:20:50.577186   79419 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0910 19:20:50.577210   79419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0910 19:20:50.618757   79419 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0910 19:20:50.618783   79419 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0910 19:20:50.659739   79419 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0910 19:20:50.659775   79419 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0910 19:20:50.729042   79419 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0910 19:20:50.729068   79419 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0910 19:20:50.760881   79419 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0910 19:20:50.760906   79419 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0910 19:20:50.781811   79419 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0910 19:20:50.781833   79419 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0910 19:20:50.819526   79419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0910 19:20:50.873869   79419 main.go:141] libmachine: Making call to close driver server
	I0910 19:20:50.873901   79419 main.go:141] libmachine: (newest-cni-374465) Calling .Close
	I0910 19:20:50.874144   79419 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:20:50.874165   79419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:20:50.874174   79419 main.go:141] libmachine: Making call to close driver server
	I0910 19:20:50.874182   79419 main.go:141] libmachine: (newest-cni-374465) Calling .Close
	I0910 19:20:50.874424   79419 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:20:50.874449   79419 main.go:141] libmachine: (newest-cni-374465) DBG | Closing plugin on server side
	I0910 19:20:50.874453   79419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:20:50.880925   79419 main.go:141] libmachine: Making call to close driver server
	I0910 19:20:50.880946   79419 main.go:141] libmachine: (newest-cni-374465) Calling .Close
	I0910 19:20:50.881272   79419 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:20:50.881280   79419 main.go:141] libmachine: (newest-cni-374465) DBG | Closing plugin on server side
	I0910 19:20:50.881289   79419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:20:52.273124   79419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.921948432s)
	I0910 19:20:52.273186   79419 main.go:141] libmachine: Making call to close driver server
	I0910 19:20:52.273202   79419 main.go:141] libmachine: (newest-cni-374465) Calling .Close
	I0910 19:20:52.273264   79419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.729066171s)
	I0910 19:20:52.273311   79419 main.go:141] libmachine: Making call to close driver server
	I0910 19:20:52.273331   79419 main.go:141] libmachine: (newest-cni-374465) Calling .Close
	I0910 19:20:52.273500   79419 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:20:52.273533   79419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:20:52.273543   79419 main.go:141] libmachine: Making call to close driver server
	I0910 19:20:52.273551   79419 main.go:141] libmachine: (newest-cni-374465) Calling .Close
	I0910 19:20:52.273621   79419 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:20:52.273629   79419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:20:52.273673   79419 main.go:141] libmachine: Making call to close driver server
	I0910 19:20:52.273687   79419 main.go:141] libmachine: (newest-cni-374465) Calling .Close
	I0910 19:20:52.273817   79419 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:20:52.273836   79419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:20:52.275039   79419 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:20:52.275057   79419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:20:52.275067   79419 addons.go:475] Verifying addon metrics-server=true in "newest-cni-374465"
	I0910 19:20:52.275044   79419 main.go:141] libmachine: (newest-cni-374465) DBG | Closing plugin on server side
	I0910 19:20:52.526439   79419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.706869672s)
	I0910 19:20:52.526496   79419 main.go:141] libmachine: Making call to close driver server
	I0910 19:20:52.526514   79419 main.go:141] libmachine: (newest-cni-374465) Calling .Close
	I0910 19:20:52.526839   79419 main.go:141] libmachine: (newest-cni-374465) DBG | Closing plugin on server side
	I0910 19:20:52.526858   79419 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:20:52.526872   79419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:20:52.526889   79419 main.go:141] libmachine: Making call to close driver server
	I0910 19:20:52.526897   79419 main.go:141] libmachine: (newest-cni-374465) Calling .Close
	I0910 19:20:52.527099   79419 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:20:52.527111   79419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:20:52.528992   79419 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-374465 addons enable metrics-server
	
	I0910 19:20:52.530355   79419 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0910 19:20:52.531470   79419 addons.go:510] duration metric: took 2.583698253s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0910 19:20:52.531499   79419 start.go:246] waiting for cluster config update ...
	I0910 19:20:52.531514   79419 start.go:255] writing updated cluster config ...
	I0910 19:20:52.531770   79419 ssh_runner.go:195] Run: rm -f paused
	I0910 19:20:52.578543   79419 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:20:52.580089   79419 out.go:177] * Done! kubectl is now configured to use "newest-cni-374465" cluster and "default" namespace by default
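The run above follows the usual startup sequence: the client polls the apiserver's /healthz endpoint over HTTPS until it answers "200 ok" (api_server.go), then ships the addon manifests to the node and applies them with kubectl apply -f. As a rough illustration only (not minikube's actual implementation), a minimal Go sketch of that health-polling step could look like the following; the endpoint URL is copied from the log, while the timeouts and the decision to skip TLS verification are assumptions made for the example.

// healthzwait: a minimal sketch of polling a Kubernetes apiserver /healthz
// endpoint until it reports "ok", in the spirit of the api_server.go wait
// shown in the log above. URL, timeouts and TLS handling are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver typically serves a self-signed certificate, so this
		// sketch skips verification; a real client would trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // healthy, as in "returned 200: ok" above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	// IP and port taken from the log above; adjust for another cluster.
	if err := waitForHealthz("https://192.168.61.46:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is healthy")
}

Once the endpoint reports healthy, the rest of the log is just the addon YAML being copied to /etc/kubernetes/addons and applied with the bundled kubectl binary.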
	
	
	==> CRI-O <==
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.319371328Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996067319344473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a34cbfc-6988-4b41-b7e8-3157b0b6a77c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.319911672Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7d51bd9-bb75-43ad-badf-89f7eaa44663 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.319969659Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7d51bd9-bb75-43ad-badf-89f7eaa44663 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.320165851Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391,PodSandboxId:d28c5f4f4a3780603b92a3af9801be921a308a87cb31a5b73d3f6b6c41de17fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725994813631660776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7536d42b-90f4-44de-a7ba-652f8e535304,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d45d79d8703a5fc2a62839ad8bb6d496ce08997cc5153453c5e9b7a59a1364,PodSandboxId:01ef94f4f5f14ac6fccd5857d26eb00e16c4ead3103026124601c7169eadb226,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1725994792626440686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0a8517a-170a-406e-89f5-7cc376bb0908,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100,PodSandboxId:508fb9d46dc56e54b23345f1a393f3152cddc61eb4a413035dee2892a6628d6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725994790408117005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nq9fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9,PodSandboxId:d28c5f4f4a3780603b92a3af9801be921a308a87cb31a5b73d3f6b6c41de17fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725994782727181976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7536d42b-90f4-44de-a7ba-652f8e535304,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27,PodSandboxId:a1be56467d27e7d8e241b79081cf999e6bf06801b77512fefae22b50774058c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725994782711889420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t8r9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca739fc-0169-433b-85f1
-17bf3ab538cb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade,PodSandboxId:07abf1a8ad095596d0304c9d02d6e49d826aa0cf9dbc2685801b579782a3f18d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725994779085118784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a8f55c3c023cbb2065ea0b24444a9d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a,PodSandboxId:4691d75c717c5e7b65e5cbf439358cf50e21cab9b3177ce29aa134e2008bf0df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725994779041992941,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a58ccf7e6cfdbd0ab779aad78dd3e581,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20,PodSandboxId:22e8260e37ec3ec52a162bd457c37fff320c66bc38cd18190f8f34fd2dabbc6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725994779011381611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98220ab07d6f1e726ce95e161182b
884,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68,PodSandboxId:c48af6846c37e9ec88371a94931dac7b050b33b056393e008158cb0ecf7657d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725994778933693513,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd88d683448b1776d4a04c84b404bf6
8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7d51bd9-bb75-43ad-badf-89f7eaa44663 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.358047438Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f02b229f-16ed-4477-be4f-04f0ae7b4fae name=/runtime.v1.RuntimeService/Version
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.358155468Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f02b229f-16ed-4477-be4f-04f0ae7b4fae name=/runtime.v1.RuntimeService/Version
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.359962804Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb42140c-e98f-4cb1-a5ee-a419b221481a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.360369928Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996067360348392,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb42140c-e98f-4cb1-a5ee-a419b221481a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.361042595Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38ea9be8-c633-49c0-8fe5-b32ae0d03257 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.361093285Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38ea9be8-c633-49c0-8fe5-b32ae0d03257 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.361290913Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391,PodSandboxId:d28c5f4f4a3780603b92a3af9801be921a308a87cb31a5b73d3f6b6c41de17fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725994813631660776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7536d42b-90f4-44de-a7ba-652f8e535304,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d45d79d8703a5fc2a62839ad8bb6d496ce08997cc5153453c5e9b7a59a1364,PodSandboxId:01ef94f4f5f14ac6fccd5857d26eb00e16c4ead3103026124601c7169eadb226,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1725994792626440686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0a8517a-170a-406e-89f5-7cc376bb0908,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100,PodSandboxId:508fb9d46dc56e54b23345f1a393f3152cddc61eb4a413035dee2892a6628d6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725994790408117005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nq9fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9,PodSandboxId:d28c5f4f4a3780603b92a3af9801be921a308a87cb31a5b73d3f6b6c41de17fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725994782727181976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7536d42b-90f4-44de-a7ba-652f8e535304,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27,PodSandboxId:a1be56467d27e7d8e241b79081cf999e6bf06801b77512fefae22b50774058c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725994782711889420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t8r9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca739fc-0169-433b-85f1
-17bf3ab538cb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade,PodSandboxId:07abf1a8ad095596d0304c9d02d6e49d826aa0cf9dbc2685801b579782a3f18d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725994779085118784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a8f55c3c023cbb2065ea0b24444a9d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a,PodSandboxId:4691d75c717c5e7b65e5cbf439358cf50e21cab9b3177ce29aa134e2008bf0df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725994779041992941,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a58ccf7e6cfdbd0ab779aad78dd3e581,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20,PodSandboxId:22e8260e37ec3ec52a162bd457c37fff320c66bc38cd18190f8f34fd2dabbc6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725994779011381611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98220ab07d6f1e726ce95e161182b
884,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68,PodSandboxId:c48af6846c37e9ec88371a94931dac7b050b33b056393e008158cb0ecf7657d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725994778933693513,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd88d683448b1776d4a04c84b404bf6
8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=38ea9be8-c633-49c0-8fe5-b32ae0d03257 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.399669945Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd20edc8-c5b8-4271-b641-c75021f89b63 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.399742685Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd20edc8-c5b8-4271-b641-c75021f89b63 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.401042949Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fc573ca7-2b6f-483e-8daa-3d5c8bb6041b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.401425385Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996067401403342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc573ca7-2b6f-483e-8daa-3d5c8bb6041b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.402117042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ddadbaf-c0a2-4286-af70-e1db558faa49 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.402173042Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ddadbaf-c0a2-4286-af70-e1db558faa49 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.402394807Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391,PodSandboxId:d28c5f4f4a3780603b92a3af9801be921a308a87cb31a5b73d3f6b6c41de17fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725994813631660776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7536d42b-90f4-44de-a7ba-652f8e535304,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d45d79d8703a5fc2a62839ad8bb6d496ce08997cc5153453c5e9b7a59a1364,PodSandboxId:01ef94f4f5f14ac6fccd5857d26eb00e16c4ead3103026124601c7169eadb226,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1725994792626440686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0a8517a-170a-406e-89f5-7cc376bb0908,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100,PodSandboxId:508fb9d46dc56e54b23345f1a393f3152cddc61eb4a413035dee2892a6628d6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725994790408117005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nq9fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9,PodSandboxId:d28c5f4f4a3780603b92a3af9801be921a308a87cb31a5b73d3f6b6c41de17fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725994782727181976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7536d42b-90f4-44de-a7ba-652f8e535304,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27,PodSandboxId:a1be56467d27e7d8e241b79081cf999e6bf06801b77512fefae22b50774058c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725994782711889420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t8r9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca739fc-0169-433b-85f1
-17bf3ab538cb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade,PodSandboxId:07abf1a8ad095596d0304c9d02d6e49d826aa0cf9dbc2685801b579782a3f18d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725994779085118784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a8f55c3c023cbb2065ea0b24444a9d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a,PodSandboxId:4691d75c717c5e7b65e5cbf439358cf50e21cab9b3177ce29aa134e2008bf0df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725994779041992941,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a58ccf7e6cfdbd0ab779aad78dd3e581,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20,PodSandboxId:22e8260e37ec3ec52a162bd457c37fff320c66bc38cd18190f8f34fd2dabbc6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725994779011381611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98220ab07d6f1e726ce95e161182b
884,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68,PodSandboxId:c48af6846c37e9ec88371a94931dac7b050b33b056393e008158cb0ecf7657d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725994778933693513,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd88d683448b1776d4a04c84b404bf6
8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ddadbaf-c0a2-4286-af70-e1db558faa49 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.434069314Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a56d2270-8fc0-49d5-a323-a34fa9a54755 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.434137789Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a56d2270-8fc0-49d5-a323-a34fa9a54755 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.435484913Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5fe60772-f4c2-43f6-937f-3a7767d3fd30 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.435989320Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996067435966765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5fe60772-f4c2-43f6-937f-3a7767d3fd30 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.436424208Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6fa8ed0-1232-4693-94f3-fb9b1f2734aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.436506548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6fa8ed0-1232-4693-94f3-fb9b1f2734aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:21:07 default-k8s-diff-port-557504 crio[714]: time="2024-09-10 19:21:07.436694283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391,PodSandboxId:d28c5f4f4a3780603b92a3af9801be921a308a87cb31a5b73d3f6b6c41de17fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725994813631660776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7536d42b-90f4-44de-a7ba-652f8e535304,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46d45d79d8703a5fc2a62839ad8bb6d496ce08997cc5153453c5e9b7a59a1364,PodSandboxId:01ef94f4f5f14ac6fccd5857d26eb00e16c4ead3103026124601c7169eadb226,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1725994792626440686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0a8517a-170a-406e-89f5-7cc376bb0908,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100,PodSandboxId:508fb9d46dc56e54b23345f1a393f3152cddc61eb4a413035dee2892a6628d6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725994790408117005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nq9fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9,PodSandboxId:d28c5f4f4a3780603b92a3af9801be921a308a87cb31a5b73d3f6b6c41de17fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725994782727181976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7536d42b-90f4-44de-a7ba-652f8e535304,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27,PodSandboxId:a1be56467d27e7d8e241b79081cf999e6bf06801b77512fefae22b50774058c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725994782711889420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t8r9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca739fc-0169-433b-85f1
-17bf3ab538cb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade,PodSandboxId:07abf1a8ad095596d0304c9d02d6e49d826aa0cf9dbc2685801b579782a3f18d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725994779085118784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a8f55c3c023cbb2065ea0b24444a9d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a,PodSandboxId:4691d75c717c5e7b65e5cbf439358cf50e21cab9b3177ce29aa134e2008bf0df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725994779041992941,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a58ccf7e6cfdbd0ab779aad78dd3e581,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20,PodSandboxId:22e8260e37ec3ec52a162bd457c37fff320c66bc38cd18190f8f34fd2dabbc6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725994779011381611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98220ab07d6f1e726ce95e161182b
884,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68,PodSandboxId:c48af6846c37e9ec88371a94931dac7b050b33b056393e008158cb0ecf7657d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725994778933693513,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-557504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd88d683448b1776d4a04c84b404bf6
8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6fa8ed0-1232-4693-94f3-fb9b1f2734aa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b3e0e8df9acc9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   d28c5f4f4a378       storage-provisioner
	46d45d79d8703       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   01ef94f4f5f14       busybox
	24f8e4dfaa105       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      21 minutes ago      Running             coredns                   1                   508fb9d46dc56       coredns-6f6b679f8f-nq9fl
	173c9f8505ac0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   d28c5f4f4a378       storage-provisioner
	48c0a781fcf34       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      21 minutes ago      Running             kube-proxy                1                   a1be56467d27e       kube-proxy-4t8r9
	f3db63297412d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   07abf1a8ad095       etcd-default-k8s-diff-port-557504
	1e3f86c05b5ff       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      21 minutes ago      Running             kube-apiserver            1                   4691d75c717c5       kube-apiserver-default-k8s-diff-port-557504
	55624c2cb31c2       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      21 minutes ago      Running             kube-controller-manager   1                   22e8260e37ec3       kube-controller-manager-default-k8s-diff-port-557504
	1a520241ca117       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      21 minutes ago      Running             kube-scheduler            1                   c48af6846c37e       kube-scheduler-default-k8s-diff-port-557504
	
	
	==> coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38751 - 32244 "HINFO IN 6012458017028077328.2712800172143965829. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00959273s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-557504
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-557504
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=default-k8s-diff-port-557504
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T18_51_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:51:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-557504
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 19:20:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 19:20:34 +0000   Tue, 10 Sep 2024 18:51:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 19:20:34 +0000   Tue, 10 Sep 2024 18:51:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 19:20:34 +0000   Tue, 10 Sep 2024 18:51:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 19:20:34 +0000   Tue, 10 Sep 2024 18:59:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.54
	  Hostname:    default-k8s-diff-port-557504
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cab7798f9fc3461b8abf4234670c0a64
	  System UUID:                cab7798f-9fc3-461b-8abf-4234670c0a64
	  Boot ID:                    0813731b-96be-409b-9746-de10369ef99f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-6f6b679f8f-nq9fl                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-557504                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-557504             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-557504    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-4t8r9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-557504             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-4sfwg                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-557504 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-557504 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-557504 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-557504 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-557504 event: Registered Node default-k8s-diff-port-557504 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-557504 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-557504 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-557504 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-557504 event: Registered Node default-k8s-diff-port-557504 in Controller
	
	
	==> dmesg <==
	[Sep10 18:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050803] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039782] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.805312] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.620469] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.618002] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.901120] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.081591] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080108] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.201989] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.127337] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.320225] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[  +4.600724] systemd-fstab-generator[798]: Ignoring "noauto" option for root device
	[  +0.073364] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.937239] systemd-fstab-generator[918]: Ignoring "noauto" option for root device
	[  +4.582327] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.523630] systemd-fstab-generator[1553]: Ignoring "noauto" option for root device
	[  +3.231118] kauditd_printk_skb: 64 callbacks suppressed
	[Sep10 19:00] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] <==
	{"level":"warn","ts":"2024-09-10T19:00:20.189159Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"307.383808ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T19:00:20.189632Z","caller":"traceutil/trace.go:171","msg":"trace[590178216] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:593; }","duration":"307.854228ms","start":"2024-09-10T19:00:19.881769Z","end":"2024-09-10T19:00:20.189623Z","steps":["trace[590178216] 'agreement among raft nodes before linearized reading'  (duration: 307.370309ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T19:00:20.189756Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T19:00:19.881728Z","time spent":"308.01925ms","remote":"127.0.0.1:53530","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-10T19:00:20.189065Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-10T19:00:19.708495Z","time spent":"480.561086ms","remote":"127.0.0.1:53724","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4416,"request content":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-4sfwg\" "}
	{"level":"info","ts":"2024-09-10T19:09:40.714196Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":818}
	{"level":"info","ts":"2024-09-10T19:09:40.724236Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":818,"took":"9.752411ms","hash":1755826084,"current-db-size-bytes":2576384,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2576384,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-09-10T19:09:40.724279Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1755826084,"revision":818,"compact-revision":-1}
	{"level":"info","ts":"2024-09-10T19:14:40.722050Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1060}
	{"level":"info","ts":"2024-09-10T19:14:40.726552Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1060,"took":"3.721022ms","hash":4159519076,"current-db-size-bytes":2576384,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1593344,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-10T19:14:40.726637Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4159519076,"revision":1060,"compact-revision":818}
	{"level":"info","ts":"2024-09-10T19:19:40.729180Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1303}
	{"level":"info","ts":"2024-09-10T19:19:40.734052Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1303,"took":"4.511538ms","hash":930545521,"current-db-size-bytes":2576384,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1560576,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-10T19:19:40.734120Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":930545521,"revision":1303,"compact-revision":1060}
	{"level":"warn","ts":"2024-09-10T19:20:43.558902Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.467339ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14269534324033570768 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.54\" mod_revision:1589 > success:<request_put:<key:\"/registry/masterleases/192.168.72.54\" value_size:66 lease:5046162287178794958 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.54\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-10T19:20:43.559234Z","caller":"traceutil/trace.go:171","msg":"trace[500082085] linearizableReadLoop","detail":"{readStateIndex:1889; appliedIndex:1888; }","duration":"210.769859ms","start":"2024-09-10T19:20:43.348432Z","end":"2024-09-10T19:20:43.559202Z","steps":["trace[500082085] 'read index received'  (duration: 102.69723ms)","trace[500082085] 'applied index is now lower than readState.Index'  (duration: 108.071558ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-10T19:20:43.559295Z","caller":"traceutil/trace.go:171","msg":"trace[1014811027] transaction","detail":"{read_only:false; response_revision:1598; number_of_response:1; }","duration":"215.936354ms","start":"2024-09-10T19:20:43.343330Z","end":"2024-09-10T19:20:43.559267Z","steps":["trace[1014811027] 'process raft request'  (duration: 107.844795ms)","trace[1014811027] 'compare'  (duration: 107.347055ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-10T19:20:43.559482Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.037794ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-09-10T19:20:43.559541Z","caller":"traceutil/trace.go:171","msg":"trace[2074766734] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1598; }","duration":"211.104653ms","start":"2024-09-10T19:20:43.348428Z","end":"2024-09-10T19:20:43.559532Z","steps":["trace[2074766734] 'agreement among raft nodes before linearized reading'  (duration: 210.907607ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T19:20:43.559690Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.899567ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T19:20:43.559754Z","caller":"traceutil/trace.go:171","msg":"trace[1777456911] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1598; }","duration":"141.968759ms","start":"2024-09-10T19:20:43.417777Z","end":"2024-09-10T19:20:43.559746Z","steps":["trace[1777456911] 'agreement among raft nodes before linearized reading'  (duration: 141.879863ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T19:20:43.785123Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.417826ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14269534324033570772 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1597 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-10T19:20:43.785283Z","caller":"traceutil/trace.go:171","msg":"trace[208916807] linearizableReadLoop","detail":"{readStateIndex:1890; appliedIndex:1889; }","duration":"220.911675ms","start":"2024-09-10T19:20:43.564359Z","end":"2024-09-10T19:20:43.785271Z","steps":["trace[208916807] 'read index received'  (duration: 106.253954ms)","trace[208916807] 'applied index is now lower than readState.Index'  (duration: 114.656641ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-10T19:20:43.785378Z","caller":"traceutil/trace.go:171","msg":"trace[327102928] transaction","detail":"{read_only:false; response_revision:1599; number_of_response:1; }","duration":"221.119419ms","start":"2024-09-10T19:20:43.564251Z","end":"2024-09-10T19:20:43.785371Z","steps":["trace[327102928] 'process raft request'  (duration: 106.401858ms)","trace[327102928] 'compare'  (duration: 114.308272ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-10T19:20:43.785938Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.570963ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-09-10T19:20:43.786004Z","caller":"traceutil/trace.go:171","msg":"trace[371358760] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:1599; }","duration":"221.640885ms","start":"2024-09-10T19:20:43.564356Z","end":"2024-09-10T19:20:43.785997Z","steps":["trace[371358760] 'agreement among raft nodes before linearized reading'  (duration: 221.448308ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:21:07 up 21 min,  0 users,  load average: 0.43, 0.14, 0.09
	Linux default-k8s-diff-port-557504 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] <==
	I0910 19:17:43.057934       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0910 19:17:43.057980       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0910 19:19:42.057371       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:19:42.057725       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0910 19:19:43.060163       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:19:43.060387       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0910 19:19:43.060970       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:19:43.061418       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0910 19:19:43.061652       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0910 19:19:43.062928       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0910 19:20:43.062921       1 handler_proxy.go:99] no RequestInfo found in the context
	W0910 19:20:43.063003       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:20:43.063087       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0910 19:20:43.063112       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0910 19:20:43.065023       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0910 19:20:43.065071       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] <==
	E0910 19:15:45.748242       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:15:46.326159       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0910 19:16:05.387483       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="266.82µs"
	E0910 19:16:15.754600       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:16:16.335050       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0910 19:16:18.384392       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="122.133µs"
	E0910 19:16:45.761545       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:16:46.344931       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:17:15.768368       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:17:16.353228       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:17:45.774566       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:17:46.359949       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:18:15.781342       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:18:16.372775       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:18:45.789761       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:18:46.381763       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:19:15.797699       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:19:16.389593       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:19:45.805079       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:19:46.397768       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:20:15.812344       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:20:16.406728       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0910 19:20:34.121901       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-557504"
	E0910 19:20:45.820294       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:20:46.414293       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 18:59:42.978940       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 18:59:42.990213       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.54"]
	E0910 18:59:42.990419       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 18:59:43.037420       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 18:59:43.037566       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 18:59:43.037620       1 server_linux.go:169] "Using iptables Proxier"
	I0910 18:59:43.040566       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 18:59:43.041123       1 server.go:483] "Version info" version="v1.31.0"
	I0910 18:59:43.041319       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:59:43.043262       1 config.go:197] "Starting service config controller"
	I0910 18:59:43.043364       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 18:59:43.043425       1 config.go:104] "Starting endpoint slice config controller"
	I0910 18:59:43.043495       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 18:59:43.044261       1 config.go:326] "Starting node config controller"
	I0910 18:59:43.044304       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 18:59:43.143925       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 18:59:43.143996       1 shared_informer.go:320] Caches are synced for service config
	I0910 18:59:43.144563       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] <==
	I0910 18:59:40.302269       1 serving.go:386] Generated self-signed cert in-memory
	W0910 18:59:42.000353       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0910 18:59:42.000397       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0910 18:59:42.000407       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0910 18:59:42.000413       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0910 18:59:42.079141       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0910 18:59:42.079185       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 18:59:42.087791       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0910 18:59:42.088017       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0910 18:59:42.088055       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 18:59:42.088069       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0910 18:59:42.188327       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 19:20:08 default-k8s-diff-port-557504 kubelet[925]: E0910 19:20:08.669641     925 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996008669288232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:20:08 default-k8s-diff-port-557504 kubelet[925]: E0910 19:20:08.670121     925 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996008669288232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:20:10 default-k8s-diff-port-557504 kubelet[925]: E0910 19:20:10.369089     925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4sfwg" podUID="6b5d0161-6a62-4752-b714-ada6b3772956"
	Sep 10 19:20:18 default-k8s-diff-port-557504 kubelet[925]: E0910 19:20:18.672175     925 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996018671533250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:20:18 default-k8s-diff-port-557504 kubelet[925]: E0910 19:20:18.672366     925 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996018671533250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:20:23 default-k8s-diff-port-557504 kubelet[925]: E0910 19:20:23.368724     925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4sfwg" podUID="6b5d0161-6a62-4752-b714-ada6b3772956"
	Sep 10 19:20:28 default-k8s-diff-port-557504 kubelet[925]: E0910 19:20:28.674552     925 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996028674130744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:20:28 default-k8s-diff-port-557504 kubelet[925]: E0910 19:20:28.674983     925 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996028674130744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:20:34 default-k8s-diff-port-557504 kubelet[925]: E0910 19:20:34.368993     925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4sfwg" podUID="6b5d0161-6a62-4752-b714-ada6b3772956"
	Sep 10 19:20:38 default-k8s-diff-port-557504 kubelet[925]: E0910 19:20:38.390514     925 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 19:20:38 default-k8s-diff-port-557504 kubelet[925]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 19:20:38 default-k8s-diff-port-557504 kubelet[925]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:20:38 default-k8s-diff-port-557504 kubelet[925]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:20:38 default-k8s-diff-port-557504 kubelet[925]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 19:20:38 default-k8s-diff-port-557504 kubelet[925]: E0910 19:20:38.679004     925 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996038678254451,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:20:38 default-k8s-diff-port-557504 kubelet[925]: E0910 19:20:38.679029     925 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996038678254451,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:20:48 default-k8s-diff-port-557504 kubelet[925]: E0910 19:20:48.681911     925 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996048681340015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:20:48 default-k8s-diff-port-557504 kubelet[925]: E0910 19:20:48.681954     925 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996048681340015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:20:49 default-k8s-diff-port-557504 kubelet[925]: E0910 19:20:49.369762     925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4sfwg" podUID="6b5d0161-6a62-4752-b714-ada6b3772956"
	Sep 10 19:20:58 default-k8s-diff-port-557504 kubelet[925]: E0910 19:20:58.684270     925 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996058683520950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:20:58 default-k8s-diff-port-557504 kubelet[925]: E0910 19:20:58.684555     925 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996058683520950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:21:01 default-k8s-diff-port-557504 kubelet[925]: E0910 19:21:01.378969     925 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 10 19:21:01 default-k8s-diff-port-557504 kubelet[925]: E0910 19:21:01.379314     925 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 10 19:21:01 default-k8s-diff-port-557504 kubelet[925]: E0910 19:21:01.379917     925 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xhr2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPr
opagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:
nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-4sfwg_kube-system(6b5d0161-6a62-4752-b714-ada6b3772956): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Sep 10 19:21:01 default-k8s-diff-port-557504 kubelet[925]: E0910 19:21:01.381347     925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-4sfwg" podUID="6b5d0161-6a62-4752-b714-ada6b3772956"
	
	
	==> storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] <==
	I0910 18:59:42.861462       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0910 19:00:12.869353       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] <==
	I0910 19:00:13.766952       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 19:00:13.778228       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 19:00:13.778416       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 19:00:31.183145       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 19:00:31.184696       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-557504_f49bae0e-e086-4bed-9cbf-26a1021824f1!
	I0910 19:00:31.184900       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"79717468-62c3-48f1-b324-f2d2880b2de2", APIVersion:"v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-557504_f49bae0e-e086-4bed-9cbf-26a1021824f1 became leader
	I0910 19:00:31.285422       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-557504_f49bae0e-e086-4bed-9cbf-26a1021824f1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-557504 -n default-k8s-diff-port-557504
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-557504 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-4sfwg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-557504 describe pod metrics-server-6867b74b74-4sfwg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-557504 describe pod metrics-server-6867b74b74-4sfwg: exit status 1 (60.789383ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-4sfwg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-557504 describe pod metrics-server-6867b74b74-4sfwg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (477.04s)
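For manual follow-up, a minimal sketch of the same post-mortem queries the helpers above run, assuming the profile context and kube-system namespace shown in the logs (the deployment name metrics-server is inferred from the metrics-server-6867b74b74 replicaset entries in the kube-controller-manager log and may differ):

	# list pods that are not Running, as helpers_test.go does above
	kubectl --context default-k8s-diff-port-557504 get po -A --field-selector=status.phase!=Running
	# inspect why the metrics-server pod stays in ImagePullBackOff (fake.domain registry, per the kubelet log)
	kubectl --context default-k8s-diff-port-557504 -n kube-system describe deploy metrics-server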

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (368.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-347802 -n no-preload-347802
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-10 19:19:51.592569972 +0000 UTC m=+6651.416344725
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-347802 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-347802 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.289µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-347802 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
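The image check that timed out above (describe deploy/dashboard-metrics-scraper exceeded its context deadline) can be retried by hand; a rough equivalent using only names from the test output, with the jsonpath expression being an assumption about the container layout:

	kubectl --context no-preload-347802 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper
	kubectl --context no-preload-347802 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o=jsonpath='{.spec.template.spec.containers[*].image}'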
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-347802 -n no-preload-347802
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-347802 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-347802 logs -n 25: (1.301453337s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo find                             | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo crio                             | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-642043                                       | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-186737 | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | disable-driver-mounts-186737                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:52 UTC |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-836868            | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-836868                                  | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-347802             | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-557504  | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC | 10 Sep 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC |                     |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-836868                 | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-836868                                  | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-432422        | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-347802                  | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-557504       | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-432422             | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 19:19 UTC | 10 Sep 24 19:19 UTC |
	| start   | -p newest-cni-374465 --memory=2200 --alsologtostderr   | newest-cni-374465            | jenkins | v1.34.0 | 10 Sep 24 19:19 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 19:19:17
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 19:19:17.115723   78671 out.go:345] Setting OutFile to fd 1 ...
	I0910 19:19:17.115943   78671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 19:19:17.115952   78671 out.go:358] Setting ErrFile to fd 2...
	I0910 19:19:17.115957   78671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 19:19:17.116117   78671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 19:19:17.116686   78671 out.go:352] Setting JSON to false
	I0910 19:19:17.117650   78671 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7309,"bootTime":1725988648,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 19:19:17.117712   78671 start.go:139] virtualization: kvm guest
	I0910 19:19:17.120197   78671 out.go:177] * [newest-cni-374465] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 19:19:17.121560   78671 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 19:19:17.121582   78671 notify.go:220] Checking for updates...
	I0910 19:19:17.123940   78671 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 19:19:17.125332   78671 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 19:19:17.126516   78671 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 19:19:17.127737   78671 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 19:19:17.128952   78671 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 19:19:17.130566   78671 config.go:182] Loaded profile config "default-k8s-diff-port-557504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:19:17.130696   78671 config.go:182] Loaded profile config "embed-certs-836868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:19:17.130809   78671 config.go:182] Loaded profile config "no-preload-347802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:19:17.130897   78671 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 19:19:17.166597   78671 out.go:177] * Using the kvm2 driver based on user configuration
	I0910 19:19:17.167785   78671 start.go:297] selected driver: kvm2
	I0910 19:19:17.167804   78671 start.go:901] validating driver "kvm2" against <nil>
	I0910 19:19:17.167824   78671 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 19:19:17.168613   78671 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 19:19:17.168717   78671 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 19:19:17.184387   78671 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 19:19:17.184457   78671 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0910 19:19:17.184494   78671 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0910 19:19:17.184847   78671 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0910 19:19:17.184912   78671 cni.go:84] Creating CNI manager for ""
	I0910 19:19:17.184924   78671 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:19:17.184933   78671 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 19:19:17.184982   78671 start.go:340] cluster config:
	{Name:newest-cni-374465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-374465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:19:17.185119   78671 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 19:19:17.186973   78671 out.go:177] * Starting "newest-cni-374465" primary control-plane node in "newest-cni-374465" cluster
	I0910 19:19:17.188155   78671 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 19:19:17.188176   78671 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0910 19:19:17.188190   78671 cache.go:56] Caching tarball of preloaded images
	I0910 19:19:17.188244   78671 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 19:19:17.188254   78671 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 19:19:17.188335   78671 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/config.json ...
	I0910 19:19:17.188351   78671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/config.json: {Name:mk41b860a32b0f6f7c5f59466767cf49c8c0c002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:19:17.188470   78671 start.go:360] acquireMachinesLock for newest-cni-374465: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 19:19:17.188495   78671 start.go:364] duration metric: took 13.827µs to acquireMachinesLock for "newest-cni-374465"
	I0910 19:19:17.188510   78671 start.go:93] Provisioning new machine with config: &{Name:newest-cni-374465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-374465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 19:19:17.188557   78671 start.go:125] createHost starting for "" (driver="kvm2")
	I0910 19:19:17.190695   78671 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 19:19:17.190813   78671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:19:17.190845   78671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:19:17.205251   78671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41467
	I0910 19:19:17.205682   78671 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:19:17.206148   78671 main.go:141] libmachine: Using API Version  1
	I0910 19:19:17.206168   78671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:19:17.206470   78671 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:19:17.206646   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetMachineName
	I0910 19:19:17.206767   78671 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:19:17.206915   78671 start.go:159] libmachine.API.Create for "newest-cni-374465" (driver="kvm2")
	I0910 19:19:17.206940   78671 client.go:168] LocalClient.Create starting
	I0910 19:19:17.206972   78671 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem
	I0910 19:19:17.207001   78671 main.go:141] libmachine: Decoding PEM data...
	I0910 19:19:17.207017   78671 main.go:141] libmachine: Parsing certificate...
	I0910 19:19:17.207074   78671 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem
	I0910 19:19:17.207091   78671 main.go:141] libmachine: Decoding PEM data...
	I0910 19:19:17.207102   78671 main.go:141] libmachine: Parsing certificate...
	I0910 19:19:17.207118   78671 main.go:141] libmachine: Running pre-create checks...
	I0910 19:19:17.207127   78671 main.go:141] libmachine: (newest-cni-374465) Calling .PreCreateCheck
	I0910 19:19:17.207414   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetConfigRaw
	I0910 19:19:17.207769   78671 main.go:141] libmachine: Creating machine...
	I0910 19:19:17.207782   78671 main.go:141] libmachine: (newest-cni-374465) Calling .Create
	I0910 19:19:17.207871   78671 main.go:141] libmachine: (newest-cni-374465) Creating KVM machine...
	I0910 19:19:17.208981   78671 main.go:141] libmachine: (newest-cni-374465) DBG | found existing default KVM network
	I0910 19:19:17.210114   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:17.209967   78695 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c0:f6:1b} reservation:<nil>}
	I0910 19:19:17.210875   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:17.210823   78695 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:6b:93:4a} reservation:<nil>}
	I0910 19:19:17.211980   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:17.211907   78695 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002e68a0}
	I0910 19:19:17.211996   78671 main.go:141] libmachine: (newest-cni-374465) DBG | created network xml: 
	I0910 19:19:17.212004   78671 main.go:141] libmachine: (newest-cni-374465) DBG | <network>
	I0910 19:19:17.212009   78671 main.go:141] libmachine: (newest-cni-374465) DBG |   <name>mk-newest-cni-374465</name>
	I0910 19:19:17.212017   78671 main.go:141] libmachine: (newest-cni-374465) DBG |   <dns enable='no'/>
	I0910 19:19:17.212022   78671 main.go:141] libmachine: (newest-cni-374465) DBG |   
	I0910 19:19:17.212049   78671 main.go:141] libmachine: (newest-cni-374465) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0910 19:19:17.212070   78671 main.go:141] libmachine: (newest-cni-374465) DBG |     <dhcp>
	I0910 19:19:17.212087   78671 main.go:141] libmachine: (newest-cni-374465) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0910 19:19:17.212098   78671 main.go:141] libmachine: (newest-cni-374465) DBG |     </dhcp>
	I0910 19:19:17.212111   78671 main.go:141] libmachine: (newest-cni-374465) DBG |   </ip>
	I0910 19:19:17.212136   78671 main.go:141] libmachine: (newest-cni-374465) DBG |   
	I0910 19:19:17.212148   78671 main.go:141] libmachine: (newest-cni-374465) DBG | </network>
	I0910 19:19:17.212157   78671 main.go:141] libmachine: (newest-cni-374465) DBG | 
	I0910 19:19:17.217125   78671 main.go:141] libmachine: (newest-cni-374465) DBG | trying to create private KVM network mk-newest-cni-374465 192.168.61.0/24...
	I0910 19:19:17.288018   78671 main.go:141] libmachine: (newest-cni-374465) DBG | private KVM network mk-newest-cni-374465 192.168.61.0/24 created
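(Aside, not part of the log: the XML dumped above is a plain libvirt network definition for the free 192.168.61.0/24 subnet the driver picked. A minimal sketch of rendering such a definition from the chosen name and subnet is shown below; the template and field names are illustrative assumptions, not minikube's actual implementation.)

// rendernetxml.go - illustrative only: renders a libvirt network definition
// shaped like the one logged above, from a profile name and a /24 gateway.
package main

import (
	"os"
	"text/template"
)

const netXML = `<network>
  <name>mk-{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='255.255.255.0'>
    <dhcp>
      <range start='{{.RangeStart}}' end='{{.RangeEnd}}'/>
    </dhcp>
  </ip>
</network>
`

func main() {
	t := template.Must(template.New("net").Parse(netXML))
	// Values taken from the log above: free subnet 192.168.61.0/24.
	err := t.Execute(os.Stdout, map[string]string{
		"Name":       "newest-cni-374465",
		"Gateway":    "192.168.61.1",
		"RangeStart": "192.168.61.2",
		"RangeEnd":   "192.168.61.253",
	})
	if err != nil {
		panic(err)
	}
}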
	I0910 19:19:17.288136   78671 main.go:141] libmachine: (newest-cni-374465) Setting up store path in /home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465 ...
	I0910 19:19:17.288169   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:17.287987   78695 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 19:19:17.288212   78671 main.go:141] libmachine: (newest-cni-374465) Building disk image from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 19:19:17.288234   78671 main.go:141] libmachine: (newest-cni-374465) Downloading /home/jenkins/minikube-integration/19598-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 19:19:17.526733   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:17.526622   78695 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/id_rsa...
	I0910 19:19:17.755433   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:17.755313   78695 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/newest-cni-374465.rawdisk...
	I0910 19:19:17.755476   78671 main.go:141] libmachine: (newest-cni-374465) DBG | Writing magic tar header
	I0910 19:19:17.755493   78671 main.go:141] libmachine: (newest-cni-374465) DBG | Writing SSH key tar header
	I0910 19:19:17.755509   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:17.755462   78695 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465 ...
	I0910 19:19:17.755641   78671 main.go:141] libmachine: (newest-cni-374465) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465
	I0910 19:19:17.755690   78671 main.go:141] libmachine: (newest-cni-374465) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube/machines
	I0910 19:19:17.755705   78671 main.go:141] libmachine: (newest-cni-374465) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465 (perms=drwx------)
	I0910 19:19:17.755722   78671 main.go:141] libmachine: (newest-cni-374465) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube/machines (perms=drwxr-xr-x)
	I0910 19:19:17.755734   78671 main.go:141] libmachine: (newest-cni-374465) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973/.minikube (perms=drwxr-xr-x)
	I0910 19:19:17.755751   78671 main.go:141] libmachine: (newest-cni-374465) Setting executable bit set on /home/jenkins/minikube-integration/19598-5973 (perms=drwxrwxr-x)
	I0910 19:19:17.755765   78671 main.go:141] libmachine: (newest-cni-374465) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 19:19:17.755777   78671 main.go:141] libmachine: (newest-cni-374465) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0910 19:19:17.755788   78671 main.go:141] libmachine: (newest-cni-374465) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5973
	I0910 19:19:17.755800   78671 main.go:141] libmachine: (newest-cni-374465) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0910 19:19:17.755820   78671 main.go:141] libmachine: (newest-cni-374465) Creating domain...
	I0910 19:19:17.755835   78671 main.go:141] libmachine: (newest-cni-374465) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0910 19:19:17.755846   78671 main.go:141] libmachine: (newest-cni-374465) DBG | Checking permissions on dir: /home/jenkins
	I0910 19:19:17.755860   78671 main.go:141] libmachine: (newest-cni-374465) DBG | Checking permissions on dir: /home
	I0910 19:19:17.755875   78671 main.go:141] libmachine: (newest-cni-374465) DBG | Skipping /home - not owner
	I0910 19:19:17.757205   78671 main.go:141] libmachine: (newest-cni-374465) define libvirt domain using xml: 
	I0910 19:19:17.757228   78671 main.go:141] libmachine: (newest-cni-374465) <domain type='kvm'>
	I0910 19:19:17.757239   78671 main.go:141] libmachine: (newest-cni-374465)   <name>newest-cni-374465</name>
	I0910 19:19:17.757247   78671 main.go:141] libmachine: (newest-cni-374465)   <memory unit='MiB'>2200</memory>
	I0910 19:19:17.757256   78671 main.go:141] libmachine: (newest-cni-374465)   <vcpu>2</vcpu>
	I0910 19:19:17.757264   78671 main.go:141] libmachine: (newest-cni-374465)   <features>
	I0910 19:19:17.757278   78671 main.go:141] libmachine: (newest-cni-374465)     <acpi/>
	I0910 19:19:17.757293   78671 main.go:141] libmachine: (newest-cni-374465)     <apic/>
	I0910 19:19:17.757333   78671 main.go:141] libmachine: (newest-cni-374465)     <pae/>
	I0910 19:19:17.757351   78671 main.go:141] libmachine: (newest-cni-374465)     
	I0910 19:19:17.757364   78671 main.go:141] libmachine: (newest-cni-374465)   </features>
	I0910 19:19:17.757375   78671 main.go:141] libmachine: (newest-cni-374465)   <cpu mode='host-passthrough'>
	I0910 19:19:17.757386   78671 main.go:141] libmachine: (newest-cni-374465)   
	I0910 19:19:17.757395   78671 main.go:141] libmachine: (newest-cni-374465)   </cpu>
	I0910 19:19:17.757404   78671 main.go:141] libmachine: (newest-cni-374465)   <os>
	I0910 19:19:17.757415   78671 main.go:141] libmachine: (newest-cni-374465)     <type>hvm</type>
	I0910 19:19:17.757423   78671 main.go:141] libmachine: (newest-cni-374465)     <boot dev='cdrom'/>
	I0910 19:19:17.757434   78671 main.go:141] libmachine: (newest-cni-374465)     <boot dev='hd'/>
	I0910 19:19:17.757446   78671 main.go:141] libmachine: (newest-cni-374465)     <bootmenu enable='no'/>
	I0910 19:19:17.757455   78671 main.go:141] libmachine: (newest-cni-374465)   </os>
	I0910 19:19:17.757464   78671 main.go:141] libmachine: (newest-cni-374465)   <devices>
	I0910 19:19:17.757475   78671 main.go:141] libmachine: (newest-cni-374465)     <disk type='file' device='cdrom'>
	I0910 19:19:17.757492   78671 main.go:141] libmachine: (newest-cni-374465)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/boot2docker.iso'/>
	I0910 19:19:17.757502   78671 main.go:141] libmachine: (newest-cni-374465)       <target dev='hdc' bus='scsi'/>
	I0910 19:19:17.757514   78671 main.go:141] libmachine: (newest-cni-374465)       <readonly/>
	I0910 19:19:17.757524   78671 main.go:141] libmachine: (newest-cni-374465)     </disk>
	I0910 19:19:17.757534   78671 main.go:141] libmachine: (newest-cni-374465)     <disk type='file' device='disk'>
	I0910 19:19:17.757545   78671 main.go:141] libmachine: (newest-cni-374465)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0910 19:19:17.757558   78671 main.go:141] libmachine: (newest-cni-374465)       <source file='/home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/newest-cni-374465.rawdisk'/>
	I0910 19:19:17.757569   78671 main.go:141] libmachine: (newest-cni-374465)       <target dev='hda' bus='virtio'/>
	I0910 19:19:17.757579   78671 main.go:141] libmachine: (newest-cni-374465)     </disk>
	I0910 19:19:17.757590   78671 main.go:141] libmachine: (newest-cni-374465)     <interface type='network'>
	I0910 19:19:17.757600   78671 main.go:141] libmachine: (newest-cni-374465)       <source network='mk-newest-cni-374465'/>
	I0910 19:19:17.757611   78671 main.go:141] libmachine: (newest-cni-374465)       <model type='virtio'/>
	I0910 19:19:17.757623   78671 main.go:141] libmachine: (newest-cni-374465)     </interface>
	I0910 19:19:17.757633   78671 main.go:141] libmachine: (newest-cni-374465)     <interface type='network'>
	I0910 19:19:17.757642   78671 main.go:141] libmachine: (newest-cni-374465)       <source network='default'/>
	I0910 19:19:17.757651   78671 main.go:141] libmachine: (newest-cni-374465)       <model type='virtio'/>
	I0910 19:19:17.757660   78671 main.go:141] libmachine: (newest-cni-374465)     </interface>
	I0910 19:19:17.757681   78671 main.go:141] libmachine: (newest-cni-374465)     <serial type='pty'>
	I0910 19:19:17.757693   78671 main.go:141] libmachine: (newest-cni-374465)       <target port='0'/>
	I0910 19:19:17.757703   78671 main.go:141] libmachine: (newest-cni-374465)     </serial>
	I0910 19:19:17.757711   78671 main.go:141] libmachine: (newest-cni-374465)     <console type='pty'>
	I0910 19:19:17.757721   78671 main.go:141] libmachine: (newest-cni-374465)       <target type='serial' port='0'/>
	I0910 19:19:17.757729   78671 main.go:141] libmachine: (newest-cni-374465)     </console>
	I0910 19:19:17.757741   78671 main.go:141] libmachine: (newest-cni-374465)     <rng model='virtio'>
	I0910 19:19:17.757755   78671 main.go:141] libmachine: (newest-cni-374465)       <backend model='random'>/dev/random</backend>
	I0910 19:19:17.757766   78671 main.go:141] libmachine: (newest-cni-374465)     </rng>
	I0910 19:19:17.757773   78671 main.go:141] libmachine: (newest-cni-374465)     
	I0910 19:19:17.757783   78671 main.go:141] libmachine: (newest-cni-374465)     
	I0910 19:19:17.757791   78671 main.go:141] libmachine: (newest-cni-374465)   </devices>
	I0910 19:19:17.757801   78671 main.go:141] libmachine: (newest-cni-374465) </domain>
	I0910 19:19:17.757811   78671 main.go:141] libmachine: (newest-cni-374465) 
	I0910 19:19:17.762592   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:97:33:a6 in network default
	I0910 19:19:17.763162   78671 main.go:141] libmachine: (newest-cni-374465) Ensuring networks are active...
	I0910 19:19:17.763187   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:17.763856   78671 main.go:141] libmachine: (newest-cni-374465) Ensuring network default is active
	I0910 19:19:17.764228   78671 main.go:141] libmachine: (newest-cni-374465) Ensuring network mk-newest-cni-374465 is active
	I0910 19:19:17.764797   78671 main.go:141] libmachine: (newest-cni-374465) Getting domain xml...
	I0910 19:19:17.765503   78671 main.go:141] libmachine: (newest-cni-374465) Creating domain...
	I0910 19:19:19.031418   78671 main.go:141] libmachine: (newest-cni-374465) Waiting to get IP...
	I0910 19:19:19.032350   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:19.032871   78671 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:19:19.032910   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:19.032849   78695 retry.go:31] will retry after 247.732966ms: waiting for machine to come up
	I0910 19:19:19.282326   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:19.282835   78671 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:19:19.282878   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:19.282800   78695 retry.go:31] will retry after 359.561323ms: waiting for machine to come up
	I0910 19:19:19.644361   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:19.644812   78671 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:19:19.644847   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:19.644764   78695 retry.go:31] will retry after 406.006938ms: waiting for machine to come up
	I0910 19:19:20.052321   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:20.052716   78671 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:19:20.052745   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:20.052685   78695 retry.go:31] will retry after 530.181096ms: waiting for machine to come up
	I0910 19:19:20.584184   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:20.584573   78671 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:19:20.584597   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:20.584549   78695 retry.go:31] will retry after 568.552539ms: waiting for machine to come up
	I0910 19:19:21.154092   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:21.154510   78671 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:19:21.154539   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:21.154469   78695 retry.go:31] will retry after 826.278517ms: waiting for machine to come up
	I0910 19:19:21.982101   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:21.982555   78671 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:19:21.982585   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:21.982506   78695 retry.go:31] will retry after 786.222351ms: waiting for machine to come up
	I0910 19:19:22.769922   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:22.770339   78671 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:19:22.770363   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:22.770308   78695 retry.go:31] will retry after 1.187476015s: waiting for machine to come up
	I0910 19:19:23.959655   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:23.960097   78671 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:19:23.960119   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:23.960050   78695 retry.go:31] will retry after 1.565869483s: waiting for machine to come up
	I0910 19:19:25.527886   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:25.528267   78671 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:19:25.528288   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:25.528227   78695 retry.go:31] will retry after 1.561405168s: waiting for machine to come up
	I0910 19:19:27.090782   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:27.091230   78671 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:19:27.091256   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:27.091200   78695 retry.go:31] will retry after 2.548358671s: waiting for machine to come up
	I0910 19:19:29.642510   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:29.643075   78671 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:19:29.643102   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:29.643051   78695 retry.go:31] will retry after 3.045740292s: waiting for machine to come up
	I0910 19:19:32.690135   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:32.690585   78671 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:19:32.690615   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:32.690529   78695 retry.go:31] will retry after 2.872082294s: waiting for machine to come up
	I0910 19:19:35.566429   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:35.566868   78671 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find current IP address of domain newest-cni-374465 in network mk-newest-cni-374465
	I0910 19:19:35.566896   78671 main.go:141] libmachine: (newest-cni-374465) DBG | I0910 19:19:35.566827   78695 retry.go:31] will retry after 4.033350948s: waiting for machine to come up
	I0910 19:19:39.602168   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:39.602606   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has current primary IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:39.602635   78671 main.go:141] libmachine: (newest-cni-374465) Found IP for machine: 192.168.61.46
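(Aside, not part of the log: the "will retry after ..." lines above show the wait-for-IP poll backing off with growing, slightly jittered delays, from roughly 250ms up to about 4s, until the DHCP lease appears some 22 seconds after domain creation. The sketch below illustrates that general poll-with-backoff pattern; lookupIP, waitForIP, and the delay schedule are hypothetical and not minikube's retry.go implementation.)

// waitforip.go - minimal sketch of a poll-with-growing-backoff loop, in the
// spirit of the "will retry after ..." messages above; lookupIP is a stub.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the DHCP leases of the libvirt network.
func lookupIP() (string, error) {
	return "", errors.New("no lease yet") // placeholder
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Grow the delay and add a little jitter, roughly matching the
		// 247ms, 359ms, 406ms, ... sequence seen in the log above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay += delay / 3
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	if ip, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}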
	I0910 19:19:39.602647   78671 main.go:141] libmachine: (newest-cni-374465) Reserving static IP address...
	I0910 19:19:39.603045   78671 main.go:141] libmachine: (newest-cni-374465) DBG | unable to find host DHCP lease matching {name: "newest-cni-374465", mac: "52:54:00:03:a9:68", ip: "192.168.61.46"} in network mk-newest-cni-374465
	I0910 19:19:39.680195   78671 main.go:141] libmachine: (newest-cni-374465) Reserved static IP address: 192.168.61.46
	I0910 19:19:39.680229   78671 main.go:141] libmachine: (newest-cni-374465) DBG | Getting to WaitForSSH function...
	I0910 19:19:39.680238   78671 main.go:141] libmachine: (newest-cni-374465) Waiting for SSH to be available...
	I0910 19:19:39.683041   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:39.683467   78671 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:19:32 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:minikube Clientid:01:52:54:00:03:a9:68}
	I0910 19:19:39.683496   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:39.683642   78671 main.go:141] libmachine: (newest-cni-374465) DBG | Using SSH client type: external
	I0910 19:19:39.683668   78671 main.go:141] libmachine: (newest-cni-374465) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/id_rsa (-rw-------)
	I0910 19:19:39.683713   78671 main.go:141] libmachine: (newest-cni-374465) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.46 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 19:19:39.683730   78671 main.go:141] libmachine: (newest-cni-374465) DBG | About to run SSH command:
	I0910 19:19:39.683746   78671 main.go:141] libmachine: (newest-cni-374465) DBG | exit 0
	I0910 19:19:39.809000   78671 main.go:141] libmachine: (newest-cni-374465) DBG | SSH cmd err, output: <nil>: 
	I0910 19:19:39.809253   78671 main.go:141] libmachine: (newest-cni-374465) KVM machine creation complete!
	I0910 19:19:39.809606   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetConfigRaw
	I0910 19:19:39.810127   78671 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:19:39.810299   78671 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:19:39.810433   78671 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0910 19:19:39.810446   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetState
	I0910 19:19:39.811705   78671 main.go:141] libmachine: Detecting operating system of created instance...
	I0910 19:19:39.811719   78671 main.go:141] libmachine: Waiting for SSH to be available...
	I0910 19:19:39.811725   78671 main.go:141] libmachine: Getting to WaitForSSH function...
	I0910 19:19:39.811735   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:19:39.814140   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:39.814471   78671 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:19:32 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:19:39.814510   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:39.814656   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:19:39.814818   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:19:39.814971   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:19:39.815100   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:19:39.815262   78671 main.go:141] libmachine: Using SSH client type: native
	I0910 19:19:39.815444   78671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.46 22 <nil> <nil>}
	I0910 19:19:39.815456   78671 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0910 19:19:39.920413   78671 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 19:19:39.920437   78671 main.go:141] libmachine: Detecting the provisioner...
	I0910 19:19:39.920446   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:19:39.923231   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:39.923526   78671 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:19:32 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:19:39.923557   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:39.923795   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:19:39.923972   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:19:39.924151   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:19:39.924282   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:19:39.924449   78671 main.go:141] libmachine: Using SSH client type: native
	I0910 19:19:39.924629   78671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.46 22 <nil> <nil>}
	I0910 19:19:39.924642   78671 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0910 19:19:40.033822   78671 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0910 19:19:40.033925   78671 main.go:141] libmachine: found compatible host: buildroot
	I0910 19:19:40.033939   78671 main.go:141] libmachine: Provisioning with buildroot...
	I0910 19:19:40.033950   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetMachineName
	I0910 19:19:40.034210   78671 buildroot.go:166] provisioning hostname "newest-cni-374465"
	I0910 19:19:40.034237   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetMachineName
	I0910 19:19:40.034427   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:19:40.037210   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:40.037618   78671 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:19:32 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:19:40.037644   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:40.037713   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:19:40.037871   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:19:40.038044   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:19:40.038197   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:19:40.038396   78671 main.go:141] libmachine: Using SSH client type: native
	I0910 19:19:40.038559   78671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.46 22 <nil> <nil>}
	I0910 19:19:40.038570   78671 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-374465 && echo "newest-cni-374465" | sudo tee /etc/hostname
	I0910 19:19:40.164643   78671 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-374465
	
	I0910 19:19:40.164703   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:19:40.167559   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:40.167936   78671 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:19:32 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:19:40.167975   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:40.168130   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:19:40.168303   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:19:40.168463   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:19:40.168625   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:19:40.168785   78671 main.go:141] libmachine: Using SSH client type: native
	I0910 19:19:40.168948   78671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.46 22 <nil> <nil>}
	I0910 19:19:40.168964   78671 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-374465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-374465/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-374465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 19:19:40.286472   78671 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 19:19:40.286502   78671 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 19:19:40.286524   78671 buildroot.go:174] setting up certificates
	I0910 19:19:40.286537   78671 provision.go:84] configureAuth start
	I0910 19:19:40.286551   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetMachineName
	I0910 19:19:40.286860   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetIP
	I0910 19:19:40.289563   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:40.290014   78671 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:19:32 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:19:40.290041   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:40.290251   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:19:40.292505   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:40.292880   78671 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:19:32 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:19:40.292921   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:40.292999   78671 provision.go:143] copyHostCerts
	I0910 19:19:40.293065   78671 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 19:19:40.293085   78671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 19:19:40.293162   78671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 19:19:40.293282   78671 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 19:19:40.293293   78671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 19:19:40.293338   78671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 19:19:40.293419   78671 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 19:19:40.293429   78671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 19:19:40.293465   78671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 19:19:40.293531   78671 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.newest-cni-374465 san=[127.0.0.1 192.168.61.46 localhost minikube newest-cni-374465]
	I0910 19:19:40.494028   78671 provision.go:177] copyRemoteCerts
	I0910 19:19:40.494082   78671 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 19:19:40.494112   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:19:40.496590   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:40.496915   78671 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:19:32 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:19:40.496940   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:40.497105   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:19:40.497292   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:19:40.497425   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:19:40.497543   78671 sshutil.go:53] new ssh client: &{IP:192.168.61.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/id_rsa Username:docker}
	I0910 19:19:40.583046   78671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 19:19:40.606732   78671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0910 19:19:40.630160   78671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 19:19:40.654037   78671 provision.go:87] duration metric: took 367.476238ms to configureAuth
	I0910 19:19:40.654058   78671 buildroot.go:189] setting minikube options for container-runtime
	I0910 19:19:40.654229   78671 config.go:182] Loaded profile config "newest-cni-374465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:19:40.654300   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:19:40.657137   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:40.657526   78671 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:19:32 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:19:40.657566   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:40.657779   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:19:40.657980   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:19:40.658132   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:19:40.658267   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:19:40.658421   78671 main.go:141] libmachine: Using SSH client type: native
	I0910 19:19:40.658601   78671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.46 22 <nil> <nil>}
	I0910 19:19:40.658616   78671 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 19:19:40.892872   78671 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 19:19:40.892896   78671 main.go:141] libmachine: Checking connection to Docker...
	I0910 19:19:40.892904   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetURL
	I0910 19:19:40.893979   78671 main.go:141] libmachine: (newest-cni-374465) DBG | Using libvirt version 6000000
	I0910 19:19:40.896133   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:40.896523   78671 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:19:32 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:19:40.896550   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:40.896708   78671 main.go:141] libmachine: Docker is up and running!
	I0910 19:19:40.896723   78671 main.go:141] libmachine: Reticulating splines...
	I0910 19:19:40.896729   78671 client.go:171] duration metric: took 23.68978315s to LocalClient.Create
	I0910 19:19:40.896750   78671 start.go:167] duration metric: took 23.689837592s to libmachine.API.Create "newest-cni-374465"
	I0910 19:19:40.896759   78671 start.go:293] postStartSetup for "newest-cni-374465" (driver="kvm2")
	I0910 19:19:40.896768   78671 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 19:19:40.896783   78671 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:19:40.897016   78671 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 19:19:40.897042   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:19:40.899370   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:40.899654   78671 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:19:32 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:19:40.899679   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:40.899849   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:19:40.900050   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:19:40.900230   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:19:40.900374   78671 sshutil.go:53] new ssh client: &{IP:192.168.61.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/id_rsa Username:docker}
	I0910 19:19:40.984089   78671 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 19:19:40.988524   78671 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 19:19:40.988546   78671 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 19:19:40.988620   78671 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 19:19:40.988720   78671 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 19:19:40.988831   78671 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 19:19:40.998358   78671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 19:19:41.022499   78671 start.go:296] duration metric: took 125.729772ms for postStartSetup
	I0910 19:19:41.022543   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetConfigRaw
	I0910 19:19:41.023268   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetIP
	I0910 19:19:41.025886   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:41.026207   78671 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:19:32 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:19:41.026231   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:41.026530   78671 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/config.json ...
	I0910 19:19:41.026722   78671 start.go:128] duration metric: took 23.838156127s to createHost
	I0910 19:19:41.026745   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:19:41.028741   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:41.029047   78671 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:19:32 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:19:41.029065   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:41.029247   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:19:41.029425   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:19:41.029649   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:19:41.029776   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:19:41.029950   78671 main.go:141] libmachine: Using SSH client type: native
	I0910 19:19:41.030158   78671 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.46 22 <nil> <nil>}
	I0910 19:19:41.030170   78671 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 19:19:41.137899   78671 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725995981.112256836
	
	I0910 19:19:41.137920   78671 fix.go:216] guest clock: 1725995981.112256836
	I0910 19:19:41.137928   78671 fix.go:229] Guest: 2024-09-10 19:19:41.112256836 +0000 UTC Remote: 2024-09-10 19:19:41.026733962 +0000 UTC m=+23.944283818 (delta=85.522874ms)
	I0910 19:19:41.137962   78671 fix.go:200] guest clock delta is within tolerance: 85.522874ms
	I0910 19:19:41.137966   78671 start.go:83] releasing machines lock for "newest-cni-374465", held for 23.949463571s
	I0910 19:19:41.137985   78671 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:19:41.138230   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetIP
	I0910 19:19:41.140781   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:41.141119   78671 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:19:32 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:19:41.141148   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:41.141289   78671 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:19:41.141714   78671 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:19:41.141902   78671 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:19:41.142017   78671 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 19:19:41.142064   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:19:41.142080   78671 ssh_runner.go:195] Run: cat /version.json
	I0910 19:19:41.142103   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHHostname
	I0910 19:19:41.144700   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:41.144750   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:41.145051   78671 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:19:32 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:19:41.145092   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:41.145118   78671 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:19:32 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:19:41.145136   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:41.145409   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:19:41.145416   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHPort
	I0910 19:19:41.145594   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:19:41.145600   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHKeyPath
	I0910 19:19:41.145748   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:19:41.145763   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetSSHUsername
	I0910 19:19:41.145905   78671 sshutil.go:53] new ssh client: &{IP:192.168.61.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/id_rsa Username:docker}
	I0910 19:19:41.145910   78671 sshutil.go:53] new ssh client: &{IP:192.168.61.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/newest-cni-374465/id_rsa Username:docker}
	I0910 19:19:41.250349   78671 ssh_runner.go:195] Run: systemctl --version
	I0910 19:19:41.256303   78671 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 19:19:41.416207   78671 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 19:19:41.421974   78671 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 19:19:41.422038   78671 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 19:19:41.438941   78671 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 19:19:41.438981   78671 start.go:495] detecting cgroup driver to use...
	I0910 19:19:41.439043   78671 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 19:19:41.456265   78671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:19:41.470755   78671 docker.go:217] disabling cri-docker service (if available) ...
	I0910 19:19:41.470815   78671 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 19:19:41.485163   78671 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 19:19:41.499053   78671 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 19:19:41.622121   78671 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 19:19:41.776641   78671 docker.go:233] disabling docker service ...
	I0910 19:19:41.776734   78671 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 19:19:41.792391   78671 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 19:19:41.804925   78671 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 19:19:41.934690   78671 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 19:19:42.065755   78671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 19:19:42.079491   78671 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:19:42.098805   78671 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 19:19:42.098858   78671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:19:42.114425   78671 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 19:19:42.114498   78671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:19:42.126712   78671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:19:42.138520   78671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:19:42.149819   78671 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 19:19:42.160810   78671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:19:42.171946   78671 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:19:42.189321   78671 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:19:42.199774   78671 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 19:19:42.209036   78671 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 19:19:42.209095   78671 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 19:19:42.222496   78671 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 19:19:42.232312   78671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:19:42.358502   78671 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 19:19:42.447638   78671 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 19:19:42.447714   78671 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 19:19:42.453039   78671 start.go:563] Will wait 60s for crictl version
	I0910 19:19:42.453109   78671 ssh_runner.go:195] Run: which crictl
	I0910 19:19:42.456857   78671 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 19:19:42.501209   78671 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 19:19:42.501292   78671 ssh_runner.go:195] Run: crio --version
	I0910 19:19:42.530128   78671 ssh_runner.go:195] Run: crio --version
	I0910 19:19:42.559921   78671 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 19:19:42.560949   78671 main.go:141] libmachine: (newest-cni-374465) Calling .GetIP
	I0910 19:19:42.563453   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:42.563758   78671 main.go:141] libmachine: (newest-cni-374465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a9:68", ip: ""} in network mk-newest-cni-374465: {Iface:virbr2 ExpiryTime:2024-09-10 20:19:32 +0000 UTC Type:0 Mac:52:54:00:03:a9:68 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:newest-cni-374465 Clientid:01:52:54:00:03:a9:68}
	I0910 19:19:42.563785   78671 main.go:141] libmachine: (newest-cni-374465) DBG | domain newest-cni-374465 has defined IP address 192.168.61.46 and MAC address 52:54:00:03:a9:68 in network mk-newest-cni-374465
	I0910 19:19:42.564035   78671 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0910 19:19:42.568050   78671 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:19:42.582870   78671 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0910 19:19:42.584332   78671 kubeadm.go:883] updating cluster {Name:newest-cni-374465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-374465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.46 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 19:19:42.584438   78671 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 19:19:42.584489   78671 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 19:19:42.617779   78671 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 19:19:42.617865   78671 ssh_runner.go:195] Run: which lz4
	I0910 19:19:42.622068   78671 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 19:19:42.626195   78671 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 19:19:42.626227   78671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 19:19:43.958597   78671 crio.go:462] duration metric: took 1.336551864s to copy over tarball
	I0910 19:19:43.958698   78671 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 19:19:46.043013   78671 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.084286151s)
	I0910 19:19:46.043044   78671 crio.go:469] duration metric: took 2.08442107s to extract the tarball
	I0910 19:19:46.043052   78671 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 19:19:46.081357   78671 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 19:19:46.132510   78671 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 19:19:46.132532   78671 cache_images.go:84] Images are preloaded, skipping loading
	I0910 19:19:46.132542   78671 kubeadm.go:934] updating node { 192.168.61.46 8443 v1.31.0 crio true true} ...
	I0910 19:19:46.132690   78671 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-374465 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.46
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:newest-cni-374465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 19:19:46.132777   78671 ssh_runner.go:195] Run: crio config
	I0910 19:19:46.183651   78671 cni.go:84] Creating CNI manager for ""
	I0910 19:19:46.183676   78671 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:19:46.183687   78671 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0910 19:19:46.183716   78671 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.46 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-374465 NodeName:newest-cni-374465 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.46"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.46 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 19:19:46.183914   78671 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.46
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-374465"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.46
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.46"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 19:19:46.183993   78671 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 19:19:46.195880   78671 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 19:19:46.195933   78671 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 19:19:46.206952   78671 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0910 19:19:46.223833   78671 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 19:19:46.241321   78671 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I0910 19:19:46.258735   78671 ssh_runner.go:195] Run: grep 192.168.61.46	control-plane.minikube.internal$ /etc/hosts
	I0910 19:19:46.262638   78671 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.46	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:19:46.275586   78671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:19:46.409591   78671 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:19:46.427650   78671 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465 for IP: 192.168.61.46
	I0910 19:19:46.427687   78671 certs.go:194] generating shared ca certs ...
	I0910 19:19:46.427707   78671 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:19:46.427878   78671 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 19:19:46.427975   78671 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 19:19:46.427993   78671 certs.go:256] generating profile certs ...
	I0910 19:19:46.428072   78671 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/client.key
	I0910 19:19:46.428093   78671 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/client.crt with IP's: []
	I0910 19:19:46.666206   78671 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/client.crt ...
	I0910 19:19:46.666243   78671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/client.crt: {Name:mkf97b1521e453ff12416c9e09cd616539875345 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:19:46.666412   78671 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/client.key ...
	I0910 19:19:46.666425   78671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/client.key: {Name:mk80986e339f42a239d3f6944a2291bffb8fc0f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:19:46.666554   78671 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/apiserver.key.29994378
	I0910 19:19:46.666581   78671 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/apiserver.crt.29994378 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.46]
	I0910 19:19:46.758925   78671 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/apiserver.crt.29994378 ...
	I0910 19:19:46.758954   78671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/apiserver.crt.29994378: {Name:mk33419e11320846a996b56d5cb85edc22338ec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:19:46.759111   78671 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/apiserver.key.29994378 ...
	I0910 19:19:46.759122   78671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/apiserver.key.29994378: {Name:mk395847e6aab2c13b247f8be824e8708f34d381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:19:46.759216   78671 certs.go:381] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/apiserver.crt.29994378 -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/apiserver.crt
	I0910 19:19:46.759302   78671 certs.go:385] copying /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/apiserver.key.29994378 -> /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/apiserver.key
	I0910 19:19:46.759359   78671 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/proxy-client.key
	I0910 19:19:46.759377   78671 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/proxy-client.crt with IP's: []
	I0910 19:19:46.903670   78671 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/proxy-client.crt ...
	I0910 19:19:46.903697   78671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/proxy-client.crt: {Name:mk4dfa09a497fd19b001b57b9f280dd99f13c439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:19:46.903862   78671 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/proxy-client.key ...
	I0910 19:19:46.903874   78671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/proxy-client.key: {Name:mk54ff97a336c50ee6f7916e591e04012b3ffa05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:19:46.904035   78671 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 19:19:46.904070   78671 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 19:19:46.904079   78671 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 19:19:46.904100   78671 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 19:19:46.904139   78671 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 19:19:46.904166   78671 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 19:19:46.904202   78671 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 19:19:46.904766   78671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 19:19:46.933545   78671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 19:19:46.957647   78671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 19:19:46.983042   78671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 19:19:47.006308   78671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0910 19:19:47.031425   78671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 19:19:47.055499   78671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 19:19:47.079076   78671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 19:19:47.103266   78671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 19:19:47.127496   78671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 19:19:47.152204   78671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 19:19:47.178892   78671 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 19:19:47.200900   78671 ssh_runner.go:195] Run: openssl version
	I0910 19:19:47.209218   78671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 19:19:47.223392   78671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:19:47.232455   78671 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:19:47.232521   78671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:19:47.239664   78671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 19:19:47.253988   78671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 19:19:47.266301   78671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 19:19:47.270843   78671 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 19:19:47.270895   78671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 19:19:47.276666   78671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 19:19:47.287814   78671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 19:19:47.298719   78671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 19:19:47.302924   78671 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 19:19:47.302969   78671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 19:19:47.308305   78671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 19:19:47.318971   78671 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 19:19:47.322826   78671 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 19:19:47.322881   78671 kubeadm.go:392] StartCluster: {Name:newest-cni-374465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-374465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.46 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:19:47.322972   78671 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 19:19:47.323017   78671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 19:19:47.363015   78671 cri.go:89] found id: ""
	I0910 19:19:47.363076   78671 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 19:19:47.373868   78671 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:19:47.383611   78671 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:19:47.393280   78671 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:19:47.393297   78671 kubeadm.go:157] found existing configuration files:
	
	I0910 19:19:47.393339   78671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:19:47.402392   78671 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:19:47.402438   78671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:19:47.411858   78671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:19:47.420998   78671 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:19:47.421042   78671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:19:47.431646   78671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:19:47.440954   78671 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:19:47.441006   78671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:19:47.450787   78671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:19:47.460091   78671 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:19:47.460152   78671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:19:47.470025   78671 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:19:47.606612   78671 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 19:19:47.606745   78671 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:19:47.723058   78671 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:19:47.723221   78671 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:19:47.723360   78671 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 19:19:47.731963   78671 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:19:47.830979   78671 out.go:235]   - Generating certificates and keys ...
	I0910 19:19:47.831134   78671 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:19:47.831256   78671 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:19:47.990338   78671 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 19:19:48.075133   78671 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0910 19:19:48.329795   78671 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0910 19:19:48.475282   78671 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0910 19:19:48.822709   78671 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0910 19:19:48.822950   78671 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-374465] and IPs [192.168.61.46 127.0.0.1 ::1]
	I0910 19:19:49.182283   78671 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0910 19:19:49.182706   78671 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-374465] and IPs [192.168.61.46 127.0.0.1 ::1]
	I0910 19:19:49.284090   78671 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 19:19:49.427792   78671 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 19:19:49.570151   78671 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0910 19:19:49.570427   78671 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:19:49.645757   78671 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:19:49.769801   78671 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 19:19:50.154729   78671 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:19:50.245245   78671 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:19:50.450730   78671 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:19:50.451370   78671 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:19:50.454826   78671 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.229900595Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995992229877506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5bf0cf18-067e-479a-9da3-c91fe8df6d81 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.230650808Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fea824cd-4884-42c7-943d-afbb25a24ee0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.230722006Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fea824cd-4884-42c7-943d-afbb25a24ee0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.231028319Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e348d2a5d14890caf884ea1ce039dd4f546dfadbc1d8dbe0a3bf1382e197ca91,PodSandboxId:ad112d1b49173406b211832777d2a4390fa2c3edba52ce58b3cecd45d0abe25b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725995070621710227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd77229d-0209-459f-ac5e-96317c425f60,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de828df738c57fe582d38572a48c8273fcbe5bb1c41abd2c6f9bc7f5730b7fa6,PodSandboxId:3f70057fd3e1e734f1da21d57f2d46424b49b6d27fb27bcb5d96533a4661375c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725995069699443775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bsp9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53cd67b5-b542-4b40-adf9-3aba78407735,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35969d1ba960c5faec33926528d2f07f401536183c4e0c8734e36606323d9cec,PodSandboxId:0c280d3aa3477243e23556c3523287fe0dcdf8bf1e28ca28f144f5f3f8174f53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725995069734358994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
66ea46-d3ad-44e4-b9fc-c7ea5c44ac15,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631aa6381282fd68571f32cd71582e9492dce784646d0f2054f0bd21ba8730b7,PodSandboxId:3b6ce16a74304a93a1fb7dfdaca51600ca0799e9a32f9f21500c7e4ea343a451,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1725995068750602687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwzhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f03fe8e3-bee7-4805-a1e9-83494f33105c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc75973e43d51392bdd21c0ffd7f7939030f85600f141f5da8161615bda8d8e3,PodSandboxId:159f5089030cf1fc1dbda76d7ae4d886c637252905b7753376110260f746a900,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725995057967469975,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e3770379cbff17e47846b6d74e2aec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8968d7d3a3c02537ada38712e94ba93157469e5d6031b68419aaedf967bd6ad2,PodSandboxId:aa60d460716120ed6687da3dac83c3a806349e88d73a25fc0ca88ec46d056023,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725995057941216061,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98dd89d8dae8c66b48ffae121412aa0,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56abb8524eda66af1df21c0d84c686b63d9e3558b4eb41cb1724a5cca4de3304,PodSandboxId:0fccef98c1bc1c3c65ba25cca14eaa722dbb92f836b278e098053564b2b884c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725995057912652155,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3176d1569984cba135ac1c183e76c043,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24feaaf348edf5d7d757a151448306d3a0e8dcb50cbf1b71477029b8ae6f1073,PodSandboxId:f6614203bea57cdc4b22bb6dde5e1705098501678f7204b4e86a3a3e10847d2d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725995057867312138,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a0ba536e0e91b581bfa3eeec42067e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec8014f1b16bfa976debb83c998a45fabfdf47d9320c7801ea5fc627951a2d16,PodSandboxId:dd6a911567a0e33c229ceb4d2602bd6b819d60f6f707571ff407a755cc2601a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725994769351562769,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98dd89d8dae8c66b48ffae121412aa0,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fea824cd-4884-42c7-943d-afbb25a24ee0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.281222872Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=215d266d-31bb-49ce-9f58-558279d08f1b name=/runtime.v1.RuntimeService/Version
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.281296071Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=215d266d-31bb-49ce-9f58-558279d08f1b name=/runtime.v1.RuntimeService/Version
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.282791603Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a708a1de-d4b3-4a67-b5b0-40e113967397 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.283286996Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995992283257389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a708a1de-d4b3-4a67-b5b0-40e113967397 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.283813128Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a12a0361-17d0-44a8-a648-f5068e964b6d name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.283864999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a12a0361-17d0-44a8-a648-f5068e964b6d name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.284112584Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e348d2a5d14890caf884ea1ce039dd4f546dfadbc1d8dbe0a3bf1382e197ca91,PodSandboxId:ad112d1b49173406b211832777d2a4390fa2c3edba52ce58b3cecd45d0abe25b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725995070621710227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd77229d-0209-459f-ac5e-96317c425f60,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de828df738c57fe582d38572a48c8273fcbe5bb1c41abd2c6f9bc7f5730b7fa6,PodSandboxId:3f70057fd3e1e734f1da21d57f2d46424b49b6d27fb27bcb5d96533a4661375c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725995069699443775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bsp9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53cd67b5-b542-4b40-adf9-3aba78407735,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35969d1ba960c5faec33926528d2f07f401536183c4e0c8734e36606323d9cec,PodSandboxId:0c280d3aa3477243e23556c3523287fe0dcdf8bf1e28ca28f144f5f3f8174f53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725995069734358994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
66ea46-d3ad-44e4-b9fc-c7ea5c44ac15,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631aa6381282fd68571f32cd71582e9492dce784646d0f2054f0bd21ba8730b7,PodSandboxId:3b6ce16a74304a93a1fb7dfdaca51600ca0799e9a32f9f21500c7e4ea343a451,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1725995068750602687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwzhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f03fe8e3-bee7-4805-a1e9-83494f33105c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc75973e43d51392bdd21c0ffd7f7939030f85600f141f5da8161615bda8d8e3,PodSandboxId:159f5089030cf1fc1dbda76d7ae4d886c637252905b7753376110260f746a900,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725995057967469975,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e3770379cbff17e47846b6d74e2aec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8968d7d3a3c02537ada38712e94ba93157469e5d6031b68419aaedf967bd6ad2,PodSandboxId:aa60d460716120ed6687da3dac83c3a806349e88d73a25fc0ca88ec46d056023,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725995057941216061,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98dd89d8dae8c66b48ffae121412aa0,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56abb8524eda66af1df21c0d84c686b63d9e3558b4eb41cb1724a5cca4de3304,PodSandboxId:0fccef98c1bc1c3c65ba25cca14eaa722dbb92f836b278e098053564b2b884c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725995057912652155,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3176d1569984cba135ac1c183e76c043,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24feaaf348edf5d7d757a151448306d3a0e8dcb50cbf1b71477029b8ae6f1073,PodSandboxId:f6614203bea57cdc4b22bb6dde5e1705098501678f7204b4e86a3a3e10847d2d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725995057867312138,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a0ba536e0e91b581bfa3eeec42067e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec8014f1b16bfa976debb83c998a45fabfdf47d9320c7801ea5fc627951a2d16,PodSandboxId:dd6a911567a0e33c229ceb4d2602bd6b819d60f6f707571ff407a755cc2601a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725994769351562769,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98dd89d8dae8c66b48ffae121412aa0,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a12a0361-17d0-44a8-a648-f5068e964b6d name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.328540131Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1df036c8-ad66-4ee0-82cf-8b8b8233d980 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.328656683Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1df036c8-ad66-4ee0-82cf-8b8b8233d980 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.330111981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9caa99af-a4d1-4e77-8fdb-d0155294bbd5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.330630231Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995992330585682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9caa99af-a4d1-4e77-8fdb-d0155294bbd5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.331191314Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2fe0615e-f42c-4806-a09d-e89893b4feff name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.331265415Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2fe0615e-f42c-4806-a09d-e89893b4feff name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.331546886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e348d2a5d14890caf884ea1ce039dd4f546dfadbc1d8dbe0a3bf1382e197ca91,PodSandboxId:ad112d1b49173406b211832777d2a4390fa2c3edba52ce58b3cecd45d0abe25b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725995070621710227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd77229d-0209-459f-ac5e-96317c425f60,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de828df738c57fe582d38572a48c8273fcbe5bb1c41abd2c6f9bc7f5730b7fa6,PodSandboxId:3f70057fd3e1e734f1da21d57f2d46424b49b6d27fb27bcb5d96533a4661375c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725995069699443775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bsp9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53cd67b5-b542-4b40-adf9-3aba78407735,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35969d1ba960c5faec33926528d2f07f401536183c4e0c8734e36606323d9cec,PodSandboxId:0c280d3aa3477243e23556c3523287fe0dcdf8bf1e28ca28f144f5f3f8174f53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725995069734358994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
66ea46-d3ad-44e4-b9fc-c7ea5c44ac15,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631aa6381282fd68571f32cd71582e9492dce784646d0f2054f0bd21ba8730b7,PodSandboxId:3b6ce16a74304a93a1fb7dfdaca51600ca0799e9a32f9f21500c7e4ea343a451,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1725995068750602687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwzhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f03fe8e3-bee7-4805-a1e9-83494f33105c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc75973e43d51392bdd21c0ffd7f7939030f85600f141f5da8161615bda8d8e3,PodSandboxId:159f5089030cf1fc1dbda76d7ae4d886c637252905b7753376110260f746a900,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725995057967469975,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e3770379cbff17e47846b6d74e2aec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8968d7d3a3c02537ada38712e94ba93157469e5d6031b68419aaedf967bd6ad2,PodSandboxId:aa60d460716120ed6687da3dac83c3a806349e88d73a25fc0ca88ec46d056023,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725995057941216061,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98dd89d8dae8c66b48ffae121412aa0,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56abb8524eda66af1df21c0d84c686b63d9e3558b4eb41cb1724a5cca4de3304,PodSandboxId:0fccef98c1bc1c3c65ba25cca14eaa722dbb92f836b278e098053564b2b884c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725995057912652155,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3176d1569984cba135ac1c183e76c043,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24feaaf348edf5d7d757a151448306d3a0e8dcb50cbf1b71477029b8ae6f1073,PodSandboxId:f6614203bea57cdc4b22bb6dde5e1705098501678f7204b4e86a3a3e10847d2d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725995057867312138,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a0ba536e0e91b581bfa3eeec42067e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec8014f1b16bfa976debb83c998a45fabfdf47d9320c7801ea5fc627951a2d16,PodSandboxId:dd6a911567a0e33c229ceb4d2602bd6b819d60f6f707571ff407a755cc2601a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725994769351562769,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98dd89d8dae8c66b48ffae121412aa0,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2fe0615e-f42c-4806-a09d-e89893b4feff name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.373731605Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0097069-bb0d-4591-9f67-1fc53e20d4c9 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.373842871Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0097069-bb0d-4591-9f67-1fc53e20d4c9 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.375480489Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9ce1e123-b10c-43e5-b7ce-36826dfc446e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.376071886Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995992375925472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ce1e123-b10c-43e5-b7ce-36826dfc446e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.376692899Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8280556b-ce55-477c-91c2-65a363aaee7c name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.376749866Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8280556b-ce55-477c-91c2-65a363aaee7c name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:52 no-preload-347802 crio[712]: time="2024-09-10 19:19:52.377017122Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e348d2a5d14890caf884ea1ce039dd4f546dfadbc1d8dbe0a3bf1382e197ca91,PodSandboxId:ad112d1b49173406b211832777d2a4390fa2c3edba52ce58b3cecd45d0abe25b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725995070621710227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd77229d-0209-459f-ac5e-96317c425f60,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de828df738c57fe582d38572a48c8273fcbe5bb1c41abd2c6f9bc7f5730b7fa6,PodSandboxId:3f70057fd3e1e734f1da21d57f2d46424b49b6d27fb27bcb5d96533a4661375c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725995069699443775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bsp9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53cd67b5-b542-4b40-adf9-3aba78407735,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35969d1ba960c5faec33926528d2f07f401536183c4e0c8734e36606323d9cec,PodSandboxId:0c280d3aa3477243e23556c3523287fe0dcdf8bf1e28ca28f144f5f3f8174f53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725995069734358994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
66ea46-d3ad-44e4-b9fc-c7ea5c44ac15,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631aa6381282fd68571f32cd71582e9492dce784646d0f2054f0bd21ba8730b7,PodSandboxId:3b6ce16a74304a93a1fb7dfdaca51600ca0799e9a32f9f21500c7e4ea343a451,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1725995068750602687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwzhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f03fe8e3-bee7-4805-a1e9-83494f33105c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc75973e43d51392bdd21c0ffd7f7939030f85600f141f5da8161615bda8d8e3,PodSandboxId:159f5089030cf1fc1dbda76d7ae4d886c637252905b7753376110260f746a900,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725995057967469975,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e3770379cbff17e47846b6d74e2aec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8968d7d3a3c02537ada38712e94ba93157469e5d6031b68419aaedf967bd6ad2,PodSandboxId:aa60d460716120ed6687da3dac83c3a806349e88d73a25fc0ca88ec46d056023,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725995057941216061,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98dd89d8dae8c66b48ffae121412aa0,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56abb8524eda66af1df21c0d84c686b63d9e3558b4eb41cb1724a5cca4de3304,PodSandboxId:0fccef98c1bc1c3c65ba25cca14eaa722dbb92f836b278e098053564b2b884c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725995057912652155,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3176d1569984cba135ac1c183e76c043,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24feaaf348edf5d7d757a151448306d3a0e8dcb50cbf1b71477029b8ae6f1073,PodSandboxId:f6614203bea57cdc4b22bb6dde5e1705098501678f7204b4e86a3a3e10847d2d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725995057867312138,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a0ba536e0e91b581bfa3eeec42067e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec8014f1b16bfa976debb83c998a45fabfdf47d9320c7801ea5fc627951a2d16,PodSandboxId:dd6a911567a0e33c229ceb4d2602bd6b819d60f6f707571ff407a755cc2601a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725994769351562769,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-347802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98dd89d8dae8c66b48ffae121412aa0,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8280556b-ce55-477c-91c2-65a363aaee7c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e348d2a5d1489       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   ad112d1b49173       storage-provisioner
	35969d1ba960c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   0c280d3aa3477       coredns-6f6b679f8f-hlbrz
	de828df738c57       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   3f70057fd3e1e       coredns-6f6b679f8f-bsp9f
	631aa6381282f       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   15 minutes ago      Running             kube-proxy                0                   3b6ce16a74304       kube-proxy-gwzhs
	cc75973e43d51       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   159f5089030cf       etcd-no-preload-347802
	8968d7d3a3c02       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   15 minutes ago      Running             kube-apiserver            2                   aa60d46071612       kube-apiserver-no-preload-347802
	56abb8524eda6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   15 minutes ago      Running             kube-controller-manager   2                   0fccef98c1bc1       kube-controller-manager-no-preload-347802
	24feaaf348edf       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   15 minutes ago      Running             kube-scheduler            2                   f6614203bea57       kube-scheduler-no-preload-347802
	ec8014f1b16bf       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   20 minutes ago      Exited              kube-apiserver            1                   dd6a911567a0e       kube-apiserver-no-preload-347802
	
	
	==> coredns [35969d1ba960c5faec33926528d2f07f401536183c4e0c8734e36606323d9cec] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [de828df738c57fe582d38572a48c8273fcbe5bb1c41abd2c6f9bc7f5730b7fa6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-347802
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-347802
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=no-preload-347802
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T19_04_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 19:04:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-347802
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 19:19:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 19:14:46 +0000   Tue, 10 Sep 2024 19:04:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 19:14:46 +0000   Tue, 10 Sep 2024 19:04:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 19:14:46 +0000   Tue, 10 Sep 2024 19:04:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 19:14:46 +0000   Tue, 10 Sep 2024 19:04:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.138
	  Hostname:    no-preload-347802
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c3af0b1f4c84b17b5f7a7fa19478efe
	  System UUID:                0c3af0b1-f4c8-4b17-b5f7-a7fa19478efe
	  Boot ID:                    45e56c11-a123-4953-95e4-32947180dc98
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-bsp9f                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-6f6b679f8f-hlbrz                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-no-preload-347802                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-347802             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-347802    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-gwzhs                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-no-preload-347802             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-cz4tz              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node no-preload-347802 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node no-preload-347802 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node no-preload-347802 status is now: NodeHasSufficientPID
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node no-preload-347802 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node no-preload-347802 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node no-preload-347802 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node no-preload-347802 event: Registered Node no-preload-347802 in Controller
	  Normal  CIDRAssignmentFailed     15m                cidrAllocator    Node no-preload-347802 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +0.040532] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.757802] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.373808] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Sep10 18:59] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.786004] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.054054] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053193] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.175985] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.147648] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.280478] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[ +15.753033] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.060564] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.210370] systemd-fstab-generator[1422]: Ignoring "noauto" option for root device
	[  +2.807161] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.299675] kauditd_printk_skb: 59 callbacks suppressed
	[  +8.421782] kauditd_printk_skb: 26 callbacks suppressed
	[Sep10 19:04] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.364469] systemd-fstab-generator[3063]: Ignoring "noauto" option for root device
	[  +4.536178] kauditd_printk_skb: 54 callbacks suppressed
	[  +2.015224] systemd-fstab-generator[3385]: Ignoring "noauto" option for root device
	[  +5.261816] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.244627] systemd-fstab-generator[3542]: Ignoring "noauto" option for root device
	[  +8.413040] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [cc75973e43d51392bdd21c0ffd7f7939030f85600f141f5da8161615bda8d8e3] <==
	{"level":"info","ts":"2024-09-10T19:04:19.269094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b11dde95a80b86b became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-10T19:04:19.269129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b11dde95a80b86b received MsgPreVoteResp from 8b11dde95a80b86b at term 1"}
	{"level":"info","ts":"2024-09-10T19:04:19.269150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b11dde95a80b86b became candidate at term 2"}
	{"level":"info","ts":"2024-09-10T19:04:19.269157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b11dde95a80b86b received MsgVoteResp from 8b11dde95a80b86b at term 2"}
	{"level":"info","ts":"2024-09-10T19:04:19.269166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b11dde95a80b86b became leader at term 2"}
	{"level":"info","ts":"2024-09-10T19:04:19.269173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8b11dde95a80b86b elected leader 8b11dde95a80b86b at term 2"}
	{"level":"info","ts":"2024-09-10T19:04:19.273402Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8b11dde95a80b86b","local-member-attributes":"{Name:no-preload-347802 ClientURLs:[https://192.168.50.138:2379]}","request-path":"/0/members/8b11dde95a80b86b/attributes","cluster-id":"ab0e41ccc9bb2ba","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T19:04:19.273488Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T19:04:19.273536Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T19:04:19.274050Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T19:04:19.276629Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T19:04:19.279251Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ab0e41ccc9bb2ba","local-member-id":"8b11dde95a80b86b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T19:04:19.279366Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T19:04:19.279419Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T19:04:19.279890Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T19:04:19.280697Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T19:04:19.284902Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.138:2379"}
	{"level":"info","ts":"2024-09-10T19:04:19.283427Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T19:04:19.287062Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T19:14:19.333437Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":722}
	{"level":"info","ts":"2024-09-10T19:14:19.343174Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":722,"took":"9.008088ms","hash":3175551300,"current-db-size-bytes":2281472,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2281472,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-09-10T19:14:19.343288Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3175551300,"revision":722,"compact-revision":-1}
	{"level":"info","ts":"2024-09-10T19:19:19.341248Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":965}
	{"level":"info","ts":"2024-09-10T19:19:19.345543Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":965,"took":"3.492402ms","hash":1455674774,"current-db-size-bytes":2281472,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1576960,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-10T19:19:19.345653Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1455674774,"revision":965,"compact-revision":722}
	
	
	==> kernel <==
	 19:19:52 up 20 min,  0 users,  load average: 0.17, 0.22, 0.24
	Linux no-preload-347802 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8968d7d3a3c02537ada38712e94ba93157469e5d6031b68419aaedf967bd6ad2] <==
	I0910 19:15:21.787476       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0910 19:15:21.787518       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0910 19:17:21.788222       1 handler_proxy.go:99] no RequestInfo found in the context
	W0910 19:17:21.788223       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:17:21.788711       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0910 19:17:21.788767       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0910 19:17:21.789933       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0910 19:17:21.790042       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0910 19:19:20.789596       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:19:20.789794       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0910 19:19:21.792270       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:19:21.792344       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0910 19:19:21.792270       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:19:21.792478       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0910 19:19:21.793630       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0910 19:19:21.793739       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [ec8014f1b16bfa976debb83c998a45fabfdf47d9320c7801ea5fc627951a2d16] <==
	W0910 19:04:10.532296       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:11.263919       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:12.264891       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:13.725208       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:13.918186       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:13.950493       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:13.978202       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.131767       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.510408       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.701303       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.817808       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.824507       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.880277       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.933704       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.963834       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.978712       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:14.985553       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:15.073868       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:15.102785       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:15.106180       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:15.111568       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:15.159299       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:15.217746       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:15.323105       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0910 19:04:15.326522       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [56abb8524eda66af1df21c0d84c686b63d9e3558b4eb41cb1724a5cca4de3304] <==
	I0910 19:14:28.326458       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0910 19:14:46.530720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-347802"
	E0910 19:14:57.866031       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:14:58.335275       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:15:27.873083       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:15:28.342836       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0910 19:15:35.607420       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="162.138µs"
	I0910 19:15:49.606239       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="118.325µs"
	E0910 19:15:57.879309       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:15:58.351313       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:16:27.886073       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:16:28.364750       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:16:57.891801       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:16:58.372303       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:17:27.899368       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:17:28.381668       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:17:57.905901       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:17:58.389406       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:18:27.913133       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:18:28.410274       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:18:57.919871       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:18:58.419184       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:19:27.929695       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:19:28.428032       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0910 19:19:52.809131       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-347802"
	
	
	==> kube-proxy [631aa6381282fd68571f32cd71582e9492dce784646d0f2054f0bd21ba8730b7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 19:04:29.204580       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 19:04:29.241650       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.138"]
	E0910 19:04:29.241733       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 19:04:29.317028       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 19:04:29.317118       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 19:04:29.317149       1 server_linux.go:169] "Using iptables Proxier"
	I0910 19:04:29.323087       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 19:04:29.323380       1 server.go:483] "Version info" version="v1.31.0"
	I0910 19:04:29.323392       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 19:04:29.334518       1 config.go:104] "Starting endpoint slice config controller"
	I0910 19:04:29.334547       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 19:04:29.334569       1 config.go:197] "Starting service config controller"
	I0910 19:04:29.334573       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 19:04:29.334894       1 config.go:326] "Starting node config controller"
	I0910 19:04:29.334903       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 19:04:29.435504       1 shared_informer.go:320] Caches are synced for node config
	I0910 19:04:29.435550       1 shared_informer.go:320] Caches are synced for service config
	I0910 19:04:29.435584       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [24feaaf348edf5d7d757a151448306d3a0e8dcb50cbf1b71477029b8ae6f1073] <==
	W0910 19:04:20.813636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 19:04:20.813662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:20.813705       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0910 19:04:20.813732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:20.813864       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 19:04:20.813993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:21.787002       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 19:04:21.787057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:21.793353       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0910 19:04:21.793400       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:21.838149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0910 19:04:21.838283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:21.918257       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 19:04:21.918494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:21.932093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 19:04:21.932147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:21.961218       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 19:04:21.961385       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:22.012205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0910 19:04:22.012312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:22.077814       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0910 19:04:22.078374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 19:04:22.176014       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0910 19:04:22.176121       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0910 19:04:25.396224       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 19:18:37 no-preload-347802 kubelet[3392]: E0910 19:18:37.588839    3392 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cz4tz" podUID="22d16ca9-922b-40d8-97d1-47a44ba70aa3"
	Sep 10 19:18:43 no-preload-347802 kubelet[3392]: E0910 19:18:43.860616    3392 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995923857181751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:18:43 no-preload-347802 kubelet[3392]: E0910 19:18:43.860644    3392 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995923857181751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:18:49 no-preload-347802 kubelet[3392]: E0910 19:18:49.588366    3392 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cz4tz" podUID="22d16ca9-922b-40d8-97d1-47a44ba70aa3"
	Sep 10 19:18:53 no-preload-347802 kubelet[3392]: E0910 19:18:53.862746    3392 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995933862166166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:18:53 no-preload-347802 kubelet[3392]: E0910 19:18:53.863166    3392 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995933862166166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:03 no-preload-347802 kubelet[3392]: E0910 19:19:03.589081    3392 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cz4tz" podUID="22d16ca9-922b-40d8-97d1-47a44ba70aa3"
	Sep 10 19:19:03 no-preload-347802 kubelet[3392]: E0910 19:19:03.864473    3392 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995943864211457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:03 no-preload-347802 kubelet[3392]: E0910 19:19:03.864527    3392 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995943864211457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:13 no-preload-347802 kubelet[3392]: E0910 19:19:13.866570    3392 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995953866197841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:13 no-preload-347802 kubelet[3392]: E0910 19:19:13.866616    3392 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995953866197841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:18 no-preload-347802 kubelet[3392]: E0910 19:19:18.589215    3392 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cz4tz" podUID="22d16ca9-922b-40d8-97d1-47a44ba70aa3"
	Sep 10 19:19:23 no-preload-347802 kubelet[3392]: E0910 19:19:23.635707    3392 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 19:19:23 no-preload-347802 kubelet[3392]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 19:19:23 no-preload-347802 kubelet[3392]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:19:23 no-preload-347802 kubelet[3392]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:19:23 no-preload-347802 kubelet[3392]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 19:19:23 no-preload-347802 kubelet[3392]: E0910 19:19:23.869386    3392 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995963868800884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:23 no-preload-347802 kubelet[3392]: E0910 19:19:23.869421    3392 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995963868800884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:33 no-preload-347802 kubelet[3392]: E0910 19:19:33.589107    3392 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cz4tz" podUID="22d16ca9-922b-40d8-97d1-47a44ba70aa3"
	Sep 10 19:19:33 no-preload-347802 kubelet[3392]: E0910 19:19:33.871568    3392 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995973871241003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:33 no-preload-347802 kubelet[3392]: E0910 19:19:33.871699    3392 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995973871241003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:43 no-preload-347802 kubelet[3392]: E0910 19:19:43.873690    3392 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995983873338562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:43 no-preload-347802 kubelet[3392]: E0910 19:19:43.873731    3392 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995983873338562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:46 no-preload-347802 kubelet[3392]: E0910 19:19:46.588257    3392 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cz4tz" podUID="22d16ca9-922b-40d8-97d1-47a44ba70aa3"
	
	
	==> storage-provisioner [e348d2a5d14890caf884ea1ce039dd4f546dfadbc1d8dbe0a3bf1382e197ca91] <==
	I0910 19:04:30.778938       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 19:04:30.793237       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 19:04:30.793313       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 19:04:30.808205       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 19:04:30.810141       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-347802_c1894828-b505-47d5-b2d2-2ccc297ff610!
	I0910 19:04:30.818308       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"23084522-d675-468e-9a48-deddae300d23", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-347802_c1894828-b505-47d5-b2d2-2ccc297ff610 became leader
	I0910 19:04:30.910724       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-347802_c1894828-b505-47d5-b2d2-2ccc297ff610!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-347802 -n no-preload-347802
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-347802 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-cz4tz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-347802 describe pod metrics-server-6867b74b74-cz4tz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-347802 describe pod metrics-server-6867b74b74-cz4tz: exit status 1 (62.903818ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-cz4tz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-347802 describe pod metrics-server-6867b74b74-cz4tz: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (368.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (384.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-836868 -n embed-certs-836868
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-10 19:20:16.28691375 +0000 UTC m=+6676.110688505
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-836868 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-836868 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.508µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-836868 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-836868 -n embed-certs-836868
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-836868 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-836868 logs -n 25: (1.186269108s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-186737 | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | disable-driver-mounts-186737                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:52 UTC |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-836868            | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-836868                                  | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-347802             | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-557504  | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC | 10 Sep 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC |                     |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-836868                 | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-836868                                  | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-432422        | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-347802                  | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-557504       | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-432422             | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 19:19 UTC | 10 Sep 24 19:19 UTC |
	| start   | -p newest-cni-374465 --memory=2200 --alsologtostderr   | newest-cni-374465            | jenkins | v1.34.0 | 10 Sep 24 19:19 UTC | 10 Sep 24 19:20 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 19:19 UTC | 10 Sep 24 19:19 UTC |
	| addons  | enable metrics-server -p newest-cni-374465             | newest-cni-374465            | jenkins | v1.34.0 | 10 Sep 24 19:20 UTC | 10 Sep 24 19:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-374465                                   | newest-cni-374465            | jenkins | v1.34.0 | 10 Sep 24 19:20 UTC | 10 Sep 24 19:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-374465                  | newest-cni-374465            | jenkins | v1.34.0 | 10 Sep 24 19:20 UTC | 10 Sep 24 19:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-374465 --memory=2200 --alsologtostderr   | newest-cni-374465            | jenkins | v1.34.0 | 10 Sep 24 19:20 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 19:20:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 19:20:15.989289   79419 out.go:345] Setting OutFile to fd 1 ...
	I0910 19:20:15.989379   79419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 19:20:15.989385   79419 out.go:358] Setting ErrFile to fd 2...
	I0910 19:20:15.989389   79419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 19:20:15.989546   79419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 19:20:15.990098   79419 out.go:352] Setting JSON to false
	I0910 19:20:15.990948   79419 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7368,"bootTime":1725988648,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 19:20:15.991000   79419 start.go:139] virtualization: kvm guest
	I0910 19:20:15.993101   79419 out.go:177] * [newest-cni-374465] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 19:20:15.994337   79419 notify.go:220] Checking for updates...
	I0910 19:20:15.994349   79419 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 19:20:15.995671   79419 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 19:20:15.996919   79419 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 19:20:15.998179   79419 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 19:20:15.999355   79419 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 19:20:16.000420   79419 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 19:20:16.002129   79419 config.go:182] Loaded profile config "newest-cni-374465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:20:16.002794   79419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:20:16.002866   79419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:20:16.017632   79419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36621
	I0910 19:20:16.017983   79419 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:20:16.018522   79419 main.go:141] libmachine: Using API Version  1
	I0910 19:20:16.018544   79419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:20:16.018898   79419 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:20:16.019068   79419 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:20:16.019291   79419 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 19:20:16.019553   79419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:20:16.019585   79419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:20:16.034064   79419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46029
	I0910 19:20:16.034534   79419 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:20:16.035060   79419 main.go:141] libmachine: Using API Version  1
	I0910 19:20:16.035091   79419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:20:16.035428   79419 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:20:16.035624   79419 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:20:16.074812   79419 out.go:177] * Using the kvm2 driver based on existing profile
	I0910 19:20:16.075879   79419 start.go:297] selected driver: kvm2
	I0910 19:20:16.075895   79419 start.go:901] validating driver "kvm2" against &{Name:newest-cni-374465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:newest-cni-374465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.46 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:20:16.075983   79419 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 19:20:16.076614   79419 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 19:20:16.076670   79419 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 19:20:16.091258   79419 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 19:20:16.091609   79419 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0910 19:20:16.091666   79419 cni.go:84] Creating CNI manager for ""
	I0910 19:20:16.091679   79419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:20:16.091714   79419 start.go:340] cluster config:
	{Name:newest-cni-374465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-374465 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.46 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:20:16.091829   79419 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 19:20:16.093929   79419 out.go:177] * Starting "newest-cni-374465" primary control-plane node in "newest-cni-374465" cluster
	I0910 19:20:16.095091   79419 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 19:20:16.095146   79419 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0910 19:20:16.095159   79419 cache.go:56] Caching tarball of preloaded images
	I0910 19:20:16.095236   79419 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 19:20:16.095246   79419 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 19:20:16.095363   79419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/newest-cni-374465/config.json ...
	I0910 19:20:16.095691   79419 start.go:360] acquireMachinesLock for newest-cni-374465: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 19:20:16.095760   79419 start.go:364] duration metric: took 41.084µs to acquireMachinesLock for "newest-cni-374465"
	I0910 19:20:16.095781   79419 start.go:96] Skipping create...Using existing machine configuration
	I0910 19:20:16.095793   79419 fix.go:54] fixHost starting: 
	I0910 19:20:16.096144   79419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:20:16.096176   79419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:20:16.110152   79419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38089
	I0910 19:20:16.110556   79419 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:20:16.111068   79419 main.go:141] libmachine: Using API Version  1
	I0910 19:20:16.111093   79419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:20:16.111563   79419 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:20:16.111732   79419 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	I0910 19:20:16.111887   79419 main.go:141] libmachine: (newest-cni-374465) Calling .GetState
	I0910 19:20:16.113626   79419 fix.go:112] recreateIfNeeded on newest-cni-374465: state=Stopped err=<nil>
	I0910 19:20:16.113648   79419 main.go:141] libmachine: (newest-cni-374465) Calling .DriverName
	W0910 19:20:16.113806   79419 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 19:20:16.115370   79419 out.go:177] * Restarting existing kvm2 VM for "newest-cni-374465" ...
	
	
	==> CRI-O <==
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.790408387Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996016790385892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=057dda35-1a0c-4d59-9fc0-685046641481 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.792440805Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4a8d45c-6654-45ce-9b82-99d4ec2e6df8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.792565094Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4a8d45c-6654-45ce-9b82-99d4ec2e6df8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.792911737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0,PodSandboxId:241ced956ebccecc9cd11be7255ea9eacc60a5d7fb579b5a5cb928683f7c5af5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725994854862538360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ed78c5-1cce-4d50-a023-5c356f331035,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd1bbbab084640d79766d7d14d3cdc5c66bd653aaae7d35f5cb8135b378c4efc,PodSandboxId:a5aeeb32481e552762401be5447df77c550225026dc65b3b81008bb8152ef1c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1725994833794038972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 13271a5e-f6a8-4a3d-98b2-9eaf96b8dff5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313,PodSandboxId:88ef68c9eb85921397b1c48b3c9679d1315503d56a2c0a25898df69bad8097da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725994831701705826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mt78p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4bbfe99-3c36-4095-b7e8-ee0861f9973f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f,PodSandboxId:241ced956ebccecc9cd11be7255ea9eacc60a5d7fb579b5a5cb928683f7c5af5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725994824014297214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
47ed78c5-1cce-4d50-a023-5c356f331035,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e,PodSandboxId:3cffcbe8ca573f781fa2a7ad185c1e6cfad19524b6a4216d75c164ad81e43c6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725994823988045781,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fddv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13f0b1df-26eb-4a6c-957d-0b7655309
cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34,PodSandboxId:e22af3fbe04a9ba6fe78408371ec5436af690308aa766830d6b7912bf4cabd5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725994820235638728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28b8a13748374dd9556b4c03e74bc5d,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3,PodSandboxId:03f9007efb7a7151b7ebf90f8a2a207dad361176bd3eb7d25992969c784d8bd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725994820247337830,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7662deb051c4e63b75dd3b02a637575b,},Annotations:map[string
]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293,PodSandboxId:5f4dee624e476b7a12bc6013ffdeff28c153726fa728c12051654cba7d2235ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725994820255289950,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e0693761ce7b6880e7e2b2f5137118,},Annotations:map[string]string{io.k
ubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc,PodSandboxId:cfa9f55fd46f24a04d4dc3a0de977528d3c98e9174f7c8a62322251c33d75c19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725994820225407769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7e867521b37d3ca565ac0de14a5983,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4a8d45c-6654-45ce-9b82-99d4ec2e6df8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.836374750Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fb92d32e-2e8f-4c59-9c94-ab422c0c4c8b name=/runtime.v1.RuntimeService/Version
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.836544619Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb92d32e-2e8f-4c59-9c94-ab422c0c4c8b name=/runtime.v1.RuntimeService/Version
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.838235764Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c6b50ed-0391-46e9-8a0c-71cc82a92b0f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.838676939Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996016838654287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c6b50ed-0391-46e9-8a0c-71cc82a92b0f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.839517044Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6bc08760-d88c-4728-9d3e-ec842af615a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.839573579Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6bc08760-d88c-4728-9d3e-ec842af615a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.840162026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0,PodSandboxId:241ced956ebccecc9cd11be7255ea9eacc60a5d7fb579b5a5cb928683f7c5af5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725994854862538360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ed78c5-1cce-4d50-a023-5c356f331035,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd1bbbab084640d79766d7d14d3cdc5c66bd653aaae7d35f5cb8135b378c4efc,PodSandboxId:a5aeeb32481e552762401be5447df77c550225026dc65b3b81008bb8152ef1c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1725994833794038972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 13271a5e-f6a8-4a3d-98b2-9eaf96b8dff5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313,PodSandboxId:88ef68c9eb85921397b1c48b3c9679d1315503d56a2c0a25898df69bad8097da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725994831701705826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mt78p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4bbfe99-3c36-4095-b7e8-ee0861f9973f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f,PodSandboxId:241ced956ebccecc9cd11be7255ea9eacc60a5d7fb579b5a5cb928683f7c5af5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725994824014297214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
47ed78c5-1cce-4d50-a023-5c356f331035,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e,PodSandboxId:3cffcbe8ca573f781fa2a7ad185c1e6cfad19524b6a4216d75c164ad81e43c6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725994823988045781,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fddv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13f0b1df-26eb-4a6c-957d-0b7655309
cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34,PodSandboxId:e22af3fbe04a9ba6fe78408371ec5436af690308aa766830d6b7912bf4cabd5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725994820235638728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28b8a13748374dd9556b4c03e74bc5d,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3,PodSandboxId:03f9007efb7a7151b7ebf90f8a2a207dad361176bd3eb7d25992969c784d8bd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725994820247337830,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7662deb051c4e63b75dd3b02a637575b,},Annotations:map[string
]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293,PodSandboxId:5f4dee624e476b7a12bc6013ffdeff28c153726fa728c12051654cba7d2235ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725994820255289950,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e0693761ce7b6880e7e2b2f5137118,},Annotations:map[string]string{io.k
ubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc,PodSandboxId:cfa9f55fd46f24a04d4dc3a0de977528d3c98e9174f7c8a62322251c33d75c19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725994820225407769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7e867521b37d3ca565ac0de14a5983,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6bc08760-d88c-4728-9d3e-ec842af615a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.884145694Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d2ca56ff-11f4-4706-9625-a8f307033b56 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.884220499Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d2ca56ff-11f4-4706-9625-a8f307033b56 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.885232544Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=858b4b5b-26a8-476e-a087-6dc9db5a4c3a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.885671909Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996016885648678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=858b4b5b-26a8-476e-a087-6dc9db5a4c3a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.886320943Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4514fb7-ecf5-44a6-823e-ac835aff69fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.886371096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4514fb7-ecf5-44a6-823e-ac835aff69fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.886631838Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0,PodSandboxId:241ced956ebccecc9cd11be7255ea9eacc60a5d7fb579b5a5cb928683f7c5af5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725994854862538360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ed78c5-1cce-4d50-a023-5c356f331035,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd1bbbab084640d79766d7d14d3cdc5c66bd653aaae7d35f5cb8135b378c4efc,PodSandboxId:a5aeeb32481e552762401be5447df77c550225026dc65b3b81008bb8152ef1c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1725994833794038972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 13271a5e-f6a8-4a3d-98b2-9eaf96b8dff5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313,PodSandboxId:88ef68c9eb85921397b1c48b3c9679d1315503d56a2c0a25898df69bad8097da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725994831701705826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mt78p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4bbfe99-3c36-4095-b7e8-ee0861f9973f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f,PodSandboxId:241ced956ebccecc9cd11be7255ea9eacc60a5d7fb579b5a5cb928683f7c5af5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725994824014297214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
47ed78c5-1cce-4d50-a023-5c356f331035,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e,PodSandboxId:3cffcbe8ca573f781fa2a7ad185c1e6cfad19524b6a4216d75c164ad81e43c6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725994823988045781,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fddv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13f0b1df-26eb-4a6c-957d-0b7655309
cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34,PodSandboxId:e22af3fbe04a9ba6fe78408371ec5436af690308aa766830d6b7912bf4cabd5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725994820235638728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28b8a13748374dd9556b4c03e74bc5d,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3,PodSandboxId:03f9007efb7a7151b7ebf90f8a2a207dad361176bd3eb7d25992969c784d8bd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725994820247337830,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7662deb051c4e63b75dd3b02a637575b,},Annotations:map[string
]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293,PodSandboxId:5f4dee624e476b7a12bc6013ffdeff28c153726fa728c12051654cba7d2235ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725994820255289950,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e0693761ce7b6880e7e2b2f5137118,},Annotations:map[string]string{io.k
ubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc,PodSandboxId:cfa9f55fd46f24a04d4dc3a0de977528d3c98e9174f7c8a62322251c33d75c19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725994820225407769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7e867521b37d3ca565ac0de14a5983,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4514fb7-ecf5-44a6-823e-ac835aff69fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.923191642Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d63a0ae-7e09-46d0-b988-429caf81c2fe name=/runtime.v1.RuntimeService/Version
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.923260185Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d63a0ae-7e09-46d0-b988-429caf81c2fe name=/runtime.v1.RuntimeService/Version
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.924690307Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=818f8776-fbac-4fc8-b99b-c0a2aa70dd6c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.925086366Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996016925058176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=818f8776-fbac-4fc8-b99b-c0a2aa70dd6c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.925604284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2140867c-1941-408f-a0de-4eb8cc51f3c0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.925654719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2140867c-1941-408f-a0de-4eb8cc51f3c0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:20:16 embed-certs-836868 crio[704]: time="2024-09-10 19:20:16.925839349Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0,PodSandboxId:241ced956ebccecc9cd11be7255ea9eacc60a5d7fb579b5a5cb928683f7c5af5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725994854862538360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ed78c5-1cce-4d50-a023-5c356f331035,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd1bbbab084640d79766d7d14d3cdc5c66bd653aaae7d35f5cb8135b378c4efc,PodSandboxId:a5aeeb32481e552762401be5447df77c550225026dc65b3b81008bb8152ef1c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1725994833794038972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 13271a5e-f6a8-4a3d-98b2-9eaf96b8dff5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313,PodSandboxId:88ef68c9eb85921397b1c48b3c9679d1315503d56a2c0a25898df69bad8097da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725994831701705826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mt78p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4bbfe99-3c36-4095-b7e8-ee0861f9973f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f,PodSandboxId:241ced956ebccecc9cd11be7255ea9eacc60a5d7fb579b5a5cb928683f7c5af5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725994824014297214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
47ed78c5-1cce-4d50-a023-5c356f331035,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e,PodSandboxId:3cffcbe8ca573f781fa2a7ad185c1e6cfad19524b6a4216d75c164ad81e43c6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725994823988045781,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fddv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13f0b1df-26eb-4a6c-957d-0b7655309
cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34,PodSandboxId:e22af3fbe04a9ba6fe78408371ec5436af690308aa766830d6b7912bf4cabd5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725994820235638728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28b8a13748374dd9556b4c03e74bc5d,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3,PodSandboxId:03f9007efb7a7151b7ebf90f8a2a207dad361176bd3eb7d25992969c784d8bd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725994820247337830,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7662deb051c4e63b75dd3b02a637575b,},Annotations:map[string
]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293,PodSandboxId:5f4dee624e476b7a12bc6013ffdeff28c153726fa728c12051654cba7d2235ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725994820255289950,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e0693761ce7b6880e7e2b2f5137118,},Annotations:map[string]string{io.k
ubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc,PodSandboxId:cfa9f55fd46f24a04d4dc3a0de977528d3c98e9174f7c8a62322251c33d75c19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725994820225407769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-836868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7e867521b37d3ca565ac0de14a5983,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2140867c-1941-408f-a0de-4eb8cc51f3c0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	11c23ffac9396       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   241ced956ebcc       storage-provisioner
	fd1bbbab08464       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   a5aeeb32481e5       busybox
	6ba324381f8f8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   88ef68c9eb859       coredns-6f6b679f8f-mt78p
	2986c78197602       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   241ced956ebcc       storage-provisioner
	f113a6d74aef2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      19 minutes ago      Running             kube-proxy                1                   3cffcbe8ca573       kube-proxy-4fddv
	b9ad0bbb3de47       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      19 minutes ago      Running             kube-apiserver            1                   5f4dee624e476       kube-apiserver-embed-certs-836868
	2582ec871deb8       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      19 minutes ago      Running             kube-controller-manager   1                   03f9007efb7a7       kube-controller-manager-embed-certs-836868
	4f0241a4c8a31       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   e22af3fbe04a9       etcd-embed-certs-836868
	6a3fc78649970       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      19 minutes ago      Running             kube-scheduler            1                   cfa9f55fd46f2       kube-scheduler-embed-certs-836868
	
	
	==> coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43525 - 21576 "HINFO IN 8786414796633565538.1486483400192273916. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010038973s
	
	
	==> describe nodes <==
	Name:               embed-certs-836868
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-836868
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=embed-certs-836868
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T18_51_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 18:51:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-836868
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 19:20:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 19:16:13 +0000   Tue, 10 Sep 2024 18:51:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 19:16:13 +0000   Tue, 10 Sep 2024 18:51:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 19:16:13 +0000   Tue, 10 Sep 2024 18:51:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 19:16:13 +0000   Tue, 10 Sep 2024 19:00:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    embed-certs-836868
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fad7b25219ca42019c13ea149c801dc4
	  System UUID:                fad7b252-19ca-4201-9c13-ea149c801dc4
	  Boot ID:                    3e25c5c7-bde2-4e61-a1b9-143b7664c1e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-6f6b679f8f-mt78p                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-836868                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-836868             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-836868    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-4fddv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-836868             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-26knw               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 28m                kube-proxy       
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node embed-certs-836868 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node embed-certs-836868 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node embed-certs-836868 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-836868 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-836868 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-836868 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node embed-certs-836868 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-836868 event: Registered Node embed-certs-836868 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node embed-certs-836868 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node embed-certs-836868 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node embed-certs-836868 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-836868 event: Registered Node embed-certs-836868 in Controller
	
	
	==> dmesg <==
	[Sep10 18:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053419] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041894] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.146357] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Sep10 19:00] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.614729] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.955578] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.061186] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055959] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.205677] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.128952] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.285168] systemd-fstab-generator[695]: Ignoring "noauto" option for root device
	[  +4.050039] systemd-fstab-generator[786]: Ignoring "noauto" option for root device
	[  +1.990525] systemd-fstab-generator[907]: Ignoring "noauto" option for root device
	[  +0.070121] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.518047] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.452033] systemd-fstab-generator[1540]: Ignoring "noauto" option for root device
	[  +3.276561] kauditd_printk_skb: 64 callbacks suppressed
	[ +25.242236] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] <==
	{"level":"info","ts":"2024-09-10T19:00:20.724628Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-10T19:00:20.720649Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-09-10T19:00:20.725631Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-09-10T19:00:22.029943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-10T19:00:22.030005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-10T19:00:22.030038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgPreVoteResp from ec1614c5c0f7335e at term 2"}
	{"level":"info","ts":"2024-09-10T19:00:22.030051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became candidate at term 3"}
	{"level":"info","ts":"2024-09-10T19:00:22.030057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgVoteResp from ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2024-09-10T19:00:22.030072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became leader at term 3"}
	{"level":"info","ts":"2024-09-10T19:00:22.030080Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ec1614c5c0f7335e elected leader ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2024-09-10T19:00:22.032684Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T19:00:22.032631Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ec1614c5c0f7335e","local-member-attributes":"{Name:embed-certs-836868 ClientURLs:[https://192.168.39.107:2379]}","request-path":"/0/members/ec1614c5c0f7335e/attributes","cluster-id":"1d5c088f9986766d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T19:00:22.033624Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T19:00:22.033808Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T19:00:22.034097Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T19:00:22.034144Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T19:00:22.034450Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T19:00:22.034808Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T19:00:22.035789Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.107:2379"}
	{"level":"info","ts":"2024-09-10T19:10:22.060974Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":852}
	{"level":"info","ts":"2024-09-10T19:10:22.070691Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":852,"took":"9.260192ms","hash":2496125384,"current-db-size-bytes":2703360,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2703360,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-09-10T19:10:22.070746Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2496125384,"revision":852,"compact-revision":-1}
	{"level":"info","ts":"2024-09-10T19:15:22.070833Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1094}
	{"level":"info","ts":"2024-09-10T19:15:22.074880Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1094,"took":"3.693906ms","hash":78274641,"current-db-size-bytes":2703360,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1679360,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-09-10T19:15:22.074936Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":78274641,"revision":1094,"compact-revision":852}
	
	
	==> kernel <==
	 19:20:17 up 20 min,  0 users,  load average: 0.26, 0.14, 0.10
	Linux embed-certs-836868 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] <==
	W0910 19:15:24.322345       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:15:24.322562       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0910 19:15:24.323646       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0910 19:15:24.323686       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0910 19:16:24.324272       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:16:24.324354       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0910 19:16:24.324306       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:16:24.324428       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0910 19:16:24.325668       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0910 19:16:24.325701       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0910 19:18:24.326659       1 handler_proxy.go:99] no RequestInfo found in the context
	W0910 19:18:24.327227       1 handler_proxy.go:99] no RequestInfo found in the context
	E0910 19:18:24.327311       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0910 19:18:24.327316       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0910 19:18:24.329615       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0910 19:18:24.329656       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] <==
	E0910 19:14:56.997205       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:14:57.565454       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:15:27.003174       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:15:27.573235       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:15:57.009314       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:15:57.580817       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0910 19:16:13.014280       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-836868"
	E0910 19:16:27.017285       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:16:27.588361       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0910 19:16:53.654228       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="222.999µs"
	E0910 19:16:57.024266       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:16:57.596409       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0910 19:17:04.663963       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="55.092µs"
	E0910 19:17:27.030231       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:17:27.603794       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:17:57.036602       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:17:57.613797       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:18:27.042208       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:18:27.622208       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:18:57.050442       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:18:57.629882       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:19:27.059653       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:19:27.640171       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0910 19:19:57.066256       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0910 19:19:57.649826       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 19:00:24.224238       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 19:00:24.236833       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.107"]
	E0910 19:00:24.236914       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 19:00:24.284599       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 19:00:24.284710       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 19:00:24.284816       1 server_linux.go:169] "Using iptables Proxier"
	I0910 19:00:24.293901       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 19:00:24.294210       1 server.go:483] "Version info" version="v1.31.0"
	I0910 19:00:24.294587       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 19:00:24.296547       1 config.go:197] "Starting service config controller"
	I0910 19:00:24.296706       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 19:00:24.296837       1 config.go:104] "Starting endpoint slice config controller"
	I0910 19:00:24.296868       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 19:00:24.297441       1 config.go:326] "Starting node config controller"
	I0910 19:00:24.297852       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 19:00:24.397380       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 19:00:24.397445       1 shared_informer.go:320] Caches are synced for service config
	I0910 19:00:24.398821       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] <==
	I0910 19:00:21.147129       1 serving.go:386] Generated self-signed cert in-memory
	W0910 19:00:23.305380       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0910 19:00:23.305626       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0910 19:00:23.305728       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0910 19:00:23.305760       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0910 19:00:23.351208       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0910 19:00:23.351294       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 19:00:23.353390       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0910 19:00:23.353609       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0910 19:00:23.353657       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 19:00:23.353691       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0910 19:00:23.453867       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 10 19:19:02 embed-certs-836868 kubelet[914]: E0910 19:19:02.637727     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26knw" podUID="fdf89bfa-f2b6-4dc4-9279-ed75c1256494"
	Sep 10 19:19:08 embed-certs-836868 kubelet[914]: E0910 19:19:08.888838     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995948888229710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:08 embed-certs-836868 kubelet[914]: E0910 19:19:08.889173     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995948888229710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:14 embed-certs-836868 kubelet[914]: E0910 19:19:14.637839     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26knw" podUID="fdf89bfa-f2b6-4dc4-9279-ed75c1256494"
	Sep 10 19:19:18 embed-certs-836868 kubelet[914]: E0910 19:19:18.653775     914 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 10 19:19:18 embed-certs-836868 kubelet[914]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 10 19:19:18 embed-certs-836868 kubelet[914]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 19:19:18 embed-certs-836868 kubelet[914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 19:19:18 embed-certs-836868 kubelet[914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 19:19:18 embed-certs-836868 kubelet[914]: E0910 19:19:18.890591     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995958890272237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:18 embed-certs-836868 kubelet[914]: E0910 19:19:18.890616     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995958890272237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:26 embed-certs-836868 kubelet[914]: E0910 19:19:26.638311     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26knw" podUID="fdf89bfa-f2b6-4dc4-9279-ed75c1256494"
	Sep 10 19:19:28 embed-certs-836868 kubelet[914]: E0910 19:19:28.892552     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995968891438758,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:28 embed-certs-836868 kubelet[914]: E0910 19:19:28.892577     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995968891438758,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:38 embed-certs-836868 kubelet[914]: E0910 19:19:38.894611     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995978893644886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:38 embed-certs-836868 kubelet[914]: E0910 19:19:38.894652     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995978893644886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:41 embed-certs-836868 kubelet[914]: E0910 19:19:41.639187     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26knw" podUID="fdf89bfa-f2b6-4dc4-9279-ed75c1256494"
	Sep 10 19:19:48 embed-certs-836868 kubelet[914]: E0910 19:19:48.895931     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995988895594228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:48 embed-certs-836868 kubelet[914]: E0910 19:19:48.895963     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995988895594228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:55 embed-certs-836868 kubelet[914]: E0910 19:19:55.637683     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26knw" podUID="fdf89bfa-f2b6-4dc4-9279-ed75c1256494"
	Sep 10 19:19:58 embed-certs-836868 kubelet[914]: E0910 19:19:58.900599     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995998899991400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:19:58 embed-certs-836868 kubelet[914]: E0910 19:19:58.900627     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995998899991400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:20:08 embed-certs-836868 kubelet[914]: E0910 19:20:08.901987     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996008901747366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:20:08 embed-certs-836868 kubelet[914]: E0910 19:20:08.902033     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725996008901747366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 10 19:20:09 embed-certs-836868 kubelet[914]: E0910 19:20:09.637605     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-26knw" podUID="fdf89bfa-f2b6-4dc4-9279-ed75c1256494"
	
	
	==> storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] <==
	I0910 19:00:54.952708       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 19:00:54.964116       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 19:00:54.964237       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 19:01:12.361687       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 19:01:12.361936       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-836868_faadb906-17fd-49ac-9744-22e8f8266142!
	I0910 19:01:12.362797       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3df91385-4ac8-4599-b951-2ed815b06ad9", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-836868_faadb906-17fd-49ac-9744-22e8f8266142 became leader
	I0910 19:01:12.462618       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-836868_faadb906-17fd-49ac-9744-22e8f8266142!
	
	
	==> storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] <==
	I0910 19:00:24.144024       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0910 19:00:54.146804       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-836868 -n embed-certs-836868
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-836868 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-26knw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-836868 describe pod metrics-server-6867b74b74-26knw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-836868 describe pod metrics-server-6867b74b74-26knw: exit status 1 (64.635494ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-26knw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-836868 describe pod metrics-server-6867b74b74-26knw: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (384.18s)
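
For reference, the AddonExistsAfterStop checks that fail here and in the section below are label-selector polls against the cluster (the repeated "waiting ... for pods matching ..." lines). The following is a minimal client-go sketch of that kind of wait, not the actual minikube test helper; the context name, namespace, label selector, and timeout are assumptions taken from the log output above.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client for a named kubeconfig context, analogous to `kubectl --context embed-certs-836868`.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "embed-certs-836868"}, // assumed from the log
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll until a matching pod is Running or the deadline passes; list errors
	// (like the "connection refused" warnings below) are retried, not fatal.
	deadline := time.Now().Add(9 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == "Running" {
					fmt.Println("healthy:", p.Name)
					return
				}
			}
		} else {
			fmt.Println("pod list failed, retrying:", err)
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for pods")
}

The actual test helpers are more involved, but the failure mode recorded above is the same: the poll never sees a Running pod before its deadline expires.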

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (126.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:17:15.663209   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:17:55.870655   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:18:43.065859   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0910 19:18:56.538399   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-432422 -n old-k8s-version-432422
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-432422 -n old-k8s-version-432422: exit status 2 (226.814555ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-432422" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-432422 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-432422 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.151µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-432422 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
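Editor's note on the repeated warnings above: the 9m0s wait keeps listing pods with the label selector k8s-app=kubernetes-dashboard against an apiserver that is stopped, so every poll returns "connection refused" until the context deadline expires. The sketch below is a minimal, assumed reconstruction of such a poll (the kubeconfig path and 10-second interval are hypothetical, not taken from the test code); it shows how the connection-refused warnings and the final "context deadline exceeded" arise.

```go
// Minimal sketch (assumed kubeconfig path and poll interval): list pods
// matching k8s-app=kubernetes-dashboard until one is Running or a
// 9-minute deadline expires with context.DeadlineExceeded.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path for the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			// With the apiserver stopped, this is the repeated
			// "connect: connection refused" warning seen in the log.
			log.Printf("WARNING: pod list returned: %v", err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("pod %s is Running\n", p.Name)
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			// After 9m0s this reports "context deadline exceeded".
			log.Fatalf("pod failed to start within 9m0s: %v", ctx.Err())
		case <-time.After(10 * time.Second):
		}
	}
}
```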
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-432422 -n old-k8s-version-432422
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-432422 -n old-k8s-version-432422: exit status 2 (221.395998ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-432422 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-432422 logs -n 25: (1.560355395s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-642043 sudo cat                              | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo                                  | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo find                             | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-642043 sudo crio                             | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-642043                                       | bridge-642043                | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-186737 | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | disable-driver-mounts-186737                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:52 UTC |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-836868            | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-836868                                  | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-347802             | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:51 UTC | 10 Sep 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-557504  | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC | 10 Sep 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:52 UTC |                     |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-836868                 | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-836868                                  | embed-certs-836868           | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-432422        | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-347802                  | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-347802                                   | no-preload-347802            | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-557504       | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-557504 | jenkins | v1.34.0 | 10 Sep 24 18:54 UTC | 10 Sep 24 19:04 UTC |
	|         | default-k8s-diff-port-557504                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-432422             | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC | 10 Sep 24 18:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-432422                              | old-k8s-version-432422       | jenkins | v1.34.0 | 10 Sep 24 18:56 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 18:56:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 18:56:02.487676   72122 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:56:02.487789   72122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:56:02.487799   72122 out.go:358] Setting ErrFile to fd 2...
	I0910 18:56:02.487804   72122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:56:02.487953   72122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:56:02.488491   72122 out.go:352] Setting JSON to false
	I0910 18:56:02.489572   72122 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5914,"bootTime":1725988648,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 18:56:02.489637   72122 start.go:139] virtualization: kvm guest
	I0910 18:56:02.491991   72122 out.go:177] * [old-k8s-version-432422] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 18:56:02.493117   72122 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:56:02.493113   72122 notify.go:220] Checking for updates...
	I0910 18:56:02.494213   72122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:56:02.495356   72122 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:56:02.496370   72122 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:56:02.497440   72122 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 18:56:02.498703   72122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:56:02.500450   72122 config.go:182] Loaded profile config "old-k8s-version-432422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0910 18:56:02.501100   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:56:02.501150   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:56:02.515836   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0910 18:56:02.516286   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:56:02.516787   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:56:02.516815   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:56:02.517116   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:56:02.517300   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:56:02.519092   72122 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0910 18:56:02.520121   72122 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:56:02.520405   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:56:02.520436   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:56:02.534860   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34601
	I0910 18:56:02.535243   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:56:02.535688   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:56:02.535711   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:56:02.536004   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:56:02.536215   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:56:02.570682   72122 out.go:177] * Using the kvm2 driver based on existing profile
	I0910 18:56:02.571710   72122 start.go:297] selected driver: kvm2
	I0910 18:56:02.571722   72122 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:56:02.571821   72122 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:56:02.572465   72122 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:56:02.572528   72122 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 18:56:02.587001   72122 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 18:56:02.587381   72122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 18:56:02.587417   72122 cni.go:84] Creating CNI manager for ""
	I0910 18:56:02.587427   72122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:56:02.587471   72122 start.go:340] cluster config:
	{Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:56:02.587599   72122 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 18:56:02.589116   72122 out.go:177] * Starting "old-k8s-version-432422" primary control-plane node in "old-k8s-version-432422" cluster
	I0910 18:56:02.590155   72122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 18:56:02.590185   72122 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0910 18:56:02.590194   72122 cache.go:56] Caching tarball of preloaded images
	I0910 18:56:02.590294   72122 preload.go:172] Found /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0910 18:56:02.590313   72122 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0910 18:56:02.590415   72122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/config.json ...
	I0910 18:56:02.590612   72122 start.go:360] acquireMachinesLock for old-k8s-version-432422: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:56:08.097313   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:11.169360   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:17.249255   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:20.321326   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:26.401359   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:29.473351   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:35.553474   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:38.625322   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:44.705324   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:47.777408   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:53.857373   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:56:56.929356   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:03.009354   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:06.081346   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:12.161342   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:15.233363   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:21.313385   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:24.385281   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:30.465347   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:33.537368   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:39.617395   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:42.689359   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:48.769334   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:51.841388   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:57:57.921359   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:00.993375   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:07.073343   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:10.145433   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:16.225336   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:19.297345   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:25.377291   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:28.449365   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:34.529306   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:37.601300   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:43.681334   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:46.753328   71183 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.107:22: connect: no route to host
	I0910 18:58:49.757234   71529 start.go:364] duration metric: took 4m17.481092907s to acquireMachinesLock for "no-preload-347802"
	I0910 18:58:49.757299   71529 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:58:49.757316   71529 fix.go:54] fixHost starting: 
	I0910 18:58:49.757667   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:58:49.757694   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:58:49.772681   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41041
	I0910 18:58:49.773067   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:58:49.773498   71529 main.go:141] libmachine: Using API Version  1
	I0910 18:58:49.773518   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:58:49.773963   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:58:49.774127   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:58:49.774279   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 18:58:49.775704   71529 fix.go:112] recreateIfNeeded on no-preload-347802: state=Stopped err=<nil>
	I0910 18:58:49.775726   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	W0910 18:58:49.775886   71529 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:58:49.777669   71529 out.go:177] * Restarting existing kvm2 VM for "no-preload-347802" ...
	I0910 18:58:49.778739   71529 main.go:141] libmachine: (no-preload-347802) Calling .Start
	I0910 18:58:49.778882   71529 main.go:141] libmachine: (no-preload-347802) Ensuring networks are active...
	I0910 18:58:49.779509   71529 main.go:141] libmachine: (no-preload-347802) Ensuring network default is active
	I0910 18:58:49.779824   71529 main.go:141] libmachine: (no-preload-347802) Ensuring network mk-no-preload-347802 is active
	I0910 18:58:49.780121   71529 main.go:141] libmachine: (no-preload-347802) Getting domain xml...
	I0910 18:58:49.780766   71529 main.go:141] libmachine: (no-preload-347802) Creating domain...
	I0910 18:58:50.967405   71529 main.go:141] libmachine: (no-preload-347802) Waiting to get IP...
	I0910 18:58:50.968284   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:50.968647   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:50.968726   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:50.968628   72707 retry.go:31] will retry after 197.094328ms: waiting for machine to come up
	I0910 18:58:51.167237   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:51.167630   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:51.167683   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:51.167603   72707 retry.go:31] will retry after 272.376855ms: waiting for machine to come up
	I0910 18:58:51.441212   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:51.441673   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:51.441698   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:51.441636   72707 retry.go:31] will retry after 458.172114ms: waiting for machine to come up
	I0910 18:58:51.900991   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:51.901464   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:51.901498   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:51.901428   72707 retry.go:31] will retry after 442.42629ms: waiting for machine to come up
	I0910 18:58:49.754913   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:58:49.754977   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 18:58:49.755310   71183 buildroot.go:166] provisioning hostname "embed-certs-836868"
	I0910 18:58:49.755335   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 18:58:49.755513   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 18:58:49.757052   71183 machine.go:96] duration metric: took 4m37.423474417s to provisionDockerMachine
	I0910 18:58:49.757138   71183 fix.go:56] duration metric: took 4m37.44458491s for fixHost
	I0910 18:58:49.757149   71183 start.go:83] releasing machines lock for "embed-certs-836868", held for 4m37.444613055s
	W0910 18:58:49.757173   71183 start.go:714] error starting host: provision: host is not running
	W0910 18:58:49.757263   71183 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0910 18:58:49.757273   71183 start.go:729] Will try again in 5 seconds ...
	I0910 18:58:52.345053   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:52.345519   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:52.345540   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:52.345463   72707 retry.go:31] will retry after 732.353971ms: waiting for machine to come up
	I0910 18:58:53.079229   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:53.079686   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:53.079714   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:53.079638   72707 retry.go:31] will retry after 658.057224ms: waiting for machine to come up
	I0910 18:58:53.739313   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:53.739750   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:53.739811   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:53.739732   72707 retry.go:31] will retry after 910.559952ms: waiting for machine to come up
	I0910 18:58:54.651714   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:54.652075   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:54.652099   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:54.652027   72707 retry.go:31] will retry after 1.410431493s: waiting for machine to come up
	I0910 18:58:56.063996   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:56.064396   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:56.064418   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:56.064360   72707 retry.go:31] will retry after 1.795467467s: waiting for machine to come up
	I0910 18:58:54.759533   71183 start.go:360] acquireMachinesLock for embed-certs-836868: {Name:mka8b4fee6f17f76c93c5b1ce15817bf0f00a352 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 18:58:57.862130   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:57.862484   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:57.862509   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:57.862453   72707 retry.go:31] will retry after 1.450403908s: waiting for machine to come up
	I0910 18:58:59.315197   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:58:59.315621   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:58:59.315657   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:58:59.315566   72707 retry.go:31] will retry after 1.81005281s: waiting for machine to come up
	I0910 18:59:01.128164   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:01.128611   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:59:01.128642   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:59:01.128563   72707 retry.go:31] will retry after 3.333505805s: waiting for machine to come up
	I0910 18:59:04.464526   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:04.465004   71529 main.go:141] libmachine: (no-preload-347802) DBG | unable to find current IP address of domain no-preload-347802 in network mk-no-preload-347802
	I0910 18:59:04.465030   71529 main.go:141] libmachine: (no-preload-347802) DBG | I0910 18:59:04.464951   72707 retry.go:31] will retry after 3.603817331s: waiting for machine to come up
	I0910 18:59:09.257584   71627 start.go:364] duration metric: took 4m27.770499275s to acquireMachinesLock for "default-k8s-diff-port-557504"
	I0910 18:59:09.257656   71627 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:09.257673   71627 fix.go:54] fixHost starting: 
	I0910 18:59:09.258100   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:09.258144   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:09.276230   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0910 18:59:09.276622   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:09.277129   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:09.277151   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:09.277489   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:09.277663   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:09.277793   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:09.279006   71627 fix.go:112] recreateIfNeeded on default-k8s-diff-port-557504: state=Stopped err=<nil>
	I0910 18:59:09.279043   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	W0910 18:59:09.279178   71627 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:09.281106   71627 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-557504" ...
	I0910 18:59:08.073057   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.073638   71529 main.go:141] libmachine: (no-preload-347802) Found IP for machine: 192.168.50.138
	I0910 18:59:08.073660   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has current primary IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.073666   71529 main.go:141] libmachine: (no-preload-347802) Reserving static IP address...
	I0910 18:59:08.074129   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "no-preload-347802", mac: "52:54:00:5b:b1:44", ip: "192.168.50.138"} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.074153   71529 main.go:141] libmachine: (no-preload-347802) Reserved static IP address: 192.168.50.138
	I0910 18:59:08.074170   71529 main.go:141] libmachine: (no-preload-347802) DBG | skip adding static IP to network mk-no-preload-347802 - found existing host DHCP lease matching {name: "no-preload-347802", mac: "52:54:00:5b:b1:44", ip: "192.168.50.138"}
	I0910 18:59:08.074179   71529 main.go:141] libmachine: (no-preload-347802) Waiting for SSH to be available...
	I0910 18:59:08.074187   71529 main.go:141] libmachine: (no-preload-347802) DBG | Getting to WaitForSSH function...
	I0910 18:59:08.076434   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.076744   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.076767   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.076928   71529 main.go:141] libmachine: (no-preload-347802) DBG | Using SSH client type: external
	I0910 18:59:08.076950   71529 main.go:141] libmachine: (no-preload-347802) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa (-rw-------)
	I0910 18:59:08.076979   71529 main.go:141] libmachine: (no-preload-347802) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:08.076992   71529 main.go:141] libmachine: (no-preload-347802) DBG | About to run SSH command:
	I0910 18:59:08.077029   71529 main.go:141] libmachine: (no-preload-347802) DBG | exit 0
	I0910 18:59:08.201181   71529 main.go:141] libmachine: (no-preload-347802) DBG | SSH cmd err, output: <nil>: 
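
The WaitForSSH probe above is run as an external ssh process; reassembled from the argument vector logged a few lines earlier it is roughly the following invocation (the trailing "exit 0" is the remote command from the "About to run SSH command" lines; the exact quoting is an assumption, not copied from the run):

    /usr/bin/ssh -F /dev/null \
      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      docker@192.168.50.138 -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa \
      -p 22 "exit 0"
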
	I0910 18:59:08.201561   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetConfigRaw
	I0910 18:59:08.202195   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:08.204390   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.204639   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.204676   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.204932   71529 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/config.json ...
	I0910 18:59:08.205227   71529 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:08.205245   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:08.205464   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.207451   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.207833   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.207862   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.207956   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.208120   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.208266   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.208402   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.208584   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.208811   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.208826   71529 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:08.317392   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:08.317421   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetMachineName
	I0910 18:59:08.317693   71529 buildroot.go:166] provisioning hostname "no-preload-347802"
	I0910 18:59:08.317721   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetMachineName
	I0910 18:59:08.317870   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.320440   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.320749   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.320777   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.320922   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.321092   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.321295   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.321433   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.321607   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.321764   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.321778   71529 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-347802 && echo "no-preload-347802" | sudo tee /etc/hostname
	I0910 18:59:08.442907   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-347802
	
	I0910 18:59:08.442932   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.445449   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.445743   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.445769   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.445930   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.446135   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.446308   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.446461   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.446642   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.446831   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.446853   71529 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-347802' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-347802/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-347802' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:59:08.561710   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
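
Taken together, the two SSH commands above amount to the hostname-provisioning sequence below. This is a consolidated sketch of what the log shows, not the harness's own script; the profile name is hard-coded here purely for illustration.

    #!/bin/sh
    # Consolidated from the two provisioning commands logged above (illustrative only).
    NAME=no-preload-347802
    # 1. Set the transient hostname and persist it for the next boot.
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    # 2. Ensure /etc/hosts resolves the new name, rewriting any existing 127.0.1.1 entry.
    if ! grep -xq ".*\s$NAME" /etc/hosts; then
        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
            sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/g" /etc/hosts
        else
            echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
        fi
    fi
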
	I0910 18:59:08.561738   71529 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:08.561760   71529 buildroot.go:174] setting up certificates
	I0910 18:59:08.561771   71529 provision.go:84] configureAuth start
	I0910 18:59:08.561782   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetMachineName
	I0910 18:59:08.562065   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:08.564917   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.565296   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.565318   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.565468   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.567579   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.567883   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.567909   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.567998   71529 provision.go:143] copyHostCerts
	I0910 18:59:08.568062   71529 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:08.568074   71529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:08.568155   71529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:08.568259   71529 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:08.568269   71529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:08.568297   71529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:08.568362   71529 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:08.568369   71529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:08.568398   71529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:08.568457   71529 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.no-preload-347802 san=[127.0.0.1 192.168.50.138 localhost minikube no-preload-347802]
	I0910 18:59:08.635212   71529 provision.go:177] copyRemoteCerts
	I0910 18:59:08.635296   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:08.635321   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.637851   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.638202   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.638227   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.638392   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.638561   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.638727   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.638850   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:08.723477   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:08.747854   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0910 18:59:08.770184   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:08.792105   71529 provision.go:87] duration metric: took 230.324534ms to configureAuth
	I0910 18:59:08.792125   71529 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:08.792306   71529 config.go:182] Loaded profile config "no-preload-347802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:59:08.792389   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:08.795139   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.795414   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:08.795440   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:08.795580   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:08.795767   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.795931   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:08.796075   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:08.796201   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:08.796385   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:08.796404   71529 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:09.021498   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:09.021530   71529 machine.go:96] duration metric: took 816.290576ms to provisionDockerMachine
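
The sysconfig drop-in written during the provisioning step above carries the --insecure-registry setting for the service CIDR and is presumably consumed by the crio service unit on the guest. A minimal way to confirm it after a start like this one, assuming the profile name from this run and minikube's ssh passthrough:

    minikube ssh -p no-preload-347802 -- cat /etc/sysconfig/crio.minikube
    # Expected, per the command output captured above:
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
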
	I0910 18:59:09.021540   71529 start.go:293] postStartSetup for "no-preload-347802" (driver="kvm2")
	I0910 18:59:09.021566   71529 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:09.021587   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.021923   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:09.021951   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.024598   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.024935   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.024965   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.025210   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.025416   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.025598   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.025747   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:09.107986   71529 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:09.111947   71529 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:09.111967   71529 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:09.112028   71529 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:09.112098   71529 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:09.112184   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:09.121734   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:09.144116   71529 start.go:296] duration metric: took 122.562738ms for postStartSetup
	I0910 18:59:09.144159   71529 fix.go:56] duration metric: took 19.386851685s for fixHost
	I0910 18:59:09.144183   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.146816   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.147237   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.147278   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.147396   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.147583   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.147754   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.147886   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.148060   71529 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:09.148274   71529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0910 18:59:09.148285   71529 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:09.257433   71529 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994749.232014074
	
	I0910 18:59:09.257456   71529 fix.go:216] guest clock: 1725994749.232014074
	I0910 18:59:09.257463   71529 fix.go:229] Guest: 2024-09-10 18:59:09.232014074 +0000 UTC Remote: 2024-09-10 18:59:09.144164668 +0000 UTC m=+277.006797443 (delta=87.849406ms)
	I0910 18:59:09.257478   71529 fix.go:200] guest clock delta is within tolerance: 87.849406ms
	I0910 18:59:09.257491   71529 start.go:83] releasing machines lock for "no-preload-347802", held for 19.50021281s
	I0910 18:59:09.257522   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.257777   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:09.260357   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.260690   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.260715   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.260895   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.261369   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.261545   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 18:59:09.261631   71529 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:09.261681   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.261749   71529 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:09.261774   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 18:59:09.264296   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.264598   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.264630   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.264650   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.264907   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.264992   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:09.265020   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:09.265067   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.265189   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 18:59:09.265266   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.265342   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 18:59:09.265400   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:09.265470   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 18:59:09.265602   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 18:59:09.367236   71529 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:09.373255   71529 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:09.513271   71529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:09.519091   71529 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:09.519153   71529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:09.534617   71529 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:09.534639   71529 start.go:495] detecting cgroup driver to use...
	I0910 18:59:09.534698   71529 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:09.551186   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:09.565123   71529 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:09.565193   71529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:09.578892   71529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:09.592571   71529 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:09.700953   71529 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:09.831175   71529 docker.go:233] disabling docker service ...
	I0910 18:59:09.831245   71529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:09.845755   71529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:09.858961   71529 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:10.008707   71529 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:10.144588   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:10.158486   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:10.176399   71529 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:59:10.176456   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.186448   71529 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:10.186511   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.196600   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.206639   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.216913   71529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:10.227030   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.237962   71529 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.255181   71529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:10.265618   71529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:10.275659   71529 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:10.275713   71529 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:10.288712   71529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:10.301886   71529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:10.415847   71529 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:59:10.500738   71529 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:10.500829   71529 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:10.506564   71529 start.go:563] Will wait 60s for crictl version
	I0910 18:59:10.506620   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:10.510639   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:10.553929   71529 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:10.554034   71529 ssh_runner.go:195] Run: crio --version
	I0910 18:59:10.582508   71529 ssh_runner.go:195] Run: crio --version
	I0910 18:59:10.622516   71529 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 18:59:09.282182   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Start
	I0910 18:59:09.282345   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Ensuring networks are active...
	I0910 18:59:09.282958   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Ensuring network default is active
	I0910 18:59:09.283450   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Ensuring network mk-default-k8s-diff-port-557504 is active
	I0910 18:59:09.283810   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Getting domain xml...
	I0910 18:59:09.284454   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Creating domain...
	I0910 18:59:10.513168   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting to get IP...
	I0910 18:59:10.514173   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.514591   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.514681   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:10.514587   72843 retry.go:31] will retry after 228.672382ms: waiting for machine to come up
	I0910 18:59:10.745046   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.745450   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:10.745508   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:10.745440   72843 retry.go:31] will retry after 329.196616ms: waiting for machine to come up
	I0910 18:59:11.075777   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.076237   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.076269   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:11.076188   72843 retry.go:31] will retry after 317.98463ms: waiting for machine to come up
	I0910 18:59:10.623864   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetIP
	I0910 18:59:10.626709   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:10.627042   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 18:59:10.627084   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 18:59:10.627336   71529 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:10.631579   71529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:10.644077   71529 kubeadm.go:883] updating cluster {Name:no-preload-347802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-347802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:10.644183   71529 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:59:10.644215   71529 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:10.679225   71529 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 18:59:10.679247   71529 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 18:59:10.679332   71529 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:10.679346   71529 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:10.679371   71529 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:10.679384   71529 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0910 18:59:10.679395   71529 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.679472   71529 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.679371   71529 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:10.679336   71529 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:10.681147   71529 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.681163   71529 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:10.681183   71529 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.681196   71529 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0910 18:59:10.681163   71529 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:10.681189   71529 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:10.681232   71529 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:10.681304   71529 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:10.841312   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.848638   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.872351   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:10.875581   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:10.882457   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:10.894360   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0910 18:59:10.895305   71529 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0910 18:59:10.895341   71529 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:10.895379   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:10.898460   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:10.953614   71529 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0910 18:59:10.953659   71529 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:10.953706   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.042770   71529 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0910 18:59:11.042837   71529 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0910 18:59:11.042862   71529 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.042873   71529 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.042914   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.042914   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.042820   71529 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0910 18:59:11.043065   71529 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.043097   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.129993   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:11.130090   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:11.130018   71529 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0910 18:59:11.130143   71529 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.130187   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.130189   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.130206   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.130271   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.239573   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:11.239626   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:11.241780   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.241795   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.241853   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.241883   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.360008   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0910 18:59:11.360027   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0910 18:59:11.360067   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.371623   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0910 18:59:11.371632   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0910 18:59:11.371632   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0910 18:59:11.480504   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0910 18:59:11.480591   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0910 18:59:11.480615   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0910 18:59:11.480635   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0910 18:59:11.480725   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0910 18:59:11.488248   71529 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:11.510860   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0910 18:59:11.510950   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0910 18:59:11.510959   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0910 18:59:11.511032   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0910 18:59:11.514065   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0910 18:59:11.514136   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0910 18:59:11.555358   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0910 18:59:11.555425   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0910 18:59:11.555445   71529 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0910 18:59:11.555465   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0910 18:59:11.555491   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0910 18:59:11.555497   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0910 18:59:11.578210   71529 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0910 18:59:11.578227   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0910 18:59:11.578258   71529 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:11.578273   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0910 18:59:11.578306   71529 ssh_runner.go:195] Run: which crictl
	I0910 18:59:11.578345   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0910 18:59:11.578310   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0910 18:59:11.395907   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.396361   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.396389   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:11.396320   72843 retry.go:31] will retry after 511.273215ms: waiting for machine to come up
	I0910 18:59:11.909582   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.910012   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:11.910041   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:11.909957   72843 retry.go:31] will retry after 712.801984ms: waiting for machine to come up
	I0910 18:59:12.624608   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:12.625042   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:12.625083   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:12.625014   72843 retry.go:31] will retry after 873.57855ms: waiting for machine to come up
	I0910 18:59:13.499767   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:13.500117   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:13.500144   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:13.500071   72843 retry.go:31] will retry after 1.180667971s: waiting for machine to come up
	I0910 18:59:14.682848   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:14.683351   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:14.683381   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:14.683297   72843 retry.go:31] will retry after 1.211684184s: waiting for machine to come up
	I0910 18:59:15.896172   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:15.896651   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:15.896679   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:15.896597   72843 retry.go:31] will retry after 1.541313035s: waiting for machine to come up
	I0910 18:59:13.534642   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.978971061s)
	I0910 18:59:13.534680   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0910 18:59:13.534686   71529 ssh_runner.go:235] Completed: which crictl: (1.956359959s)
	I0910 18:59:13.534704   71529 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0910 18:59:13.534753   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:13.534754   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0910 18:59:13.580670   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:17.439293   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:17.439652   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:17.439679   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:17.439607   72843 retry.go:31] will retry after 2.232253017s: waiting for machine to come up
	I0910 18:59:19.673727   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:19.674114   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:19.674141   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:19.674070   72843 retry.go:31] will retry after 2.324233118s: waiting for machine to come up
	I0910 18:59:17.225574   71529 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.644871938s)
	I0910 18:59:17.225574   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.690724664s)
	I0910 18:59:17.225647   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0910 18:59:17.225671   71529 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:17.225676   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0910 18:59:17.225702   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0910 18:59:19.705947   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.48021773s)
	I0910 18:59:19.705982   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0910 18:59:19.706006   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0910 18:59:19.706045   71529 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.480359026s)
	I0910 18:59:19.706069   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0910 18:59:19.706098   71529 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0910 18:59:19.706176   71529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0910 18:59:21.666588   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.960494926s)
	I0910 18:59:21.666623   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0910 18:59:21.666640   71529 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.960446302s)
	I0910 18:59:21.666648   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0910 18:59:21.666666   71529 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0910 18:59:21.666699   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0910 18:59:22.000591   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:22.001014   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:22.001047   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:22.000951   72843 retry.go:31] will retry after 3.327224401s: waiting for machine to come up
	I0910 18:59:25.329967   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:25.330414   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | unable to find current IP address of domain default-k8s-diff-port-557504 in network mk-default-k8s-diff-port-557504
	I0910 18:59:25.330445   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | I0910 18:59:25.330367   72843 retry.go:31] will retry after 3.45596573s: waiting for machine to come up
	I0910 18:59:23.216195   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.549470753s)
	I0910 18:59:23.216223   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0910 18:59:23.216243   71529 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0910 18:59:23.216286   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0910 18:59:25.077483   71529 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.861176975s)
	I0910 18:59:25.077515   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0910 18:59:25.077547   71529 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0910 18:59:25.077640   71529 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0910 18:59:25.919427   71529 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0910 18:59:25.919478   71529 cache_images.go:123] Successfully loaded all cached images
	I0910 18:59:25.919486   71529 cache_images.go:92] duration metric: took 15.240223152s to LoadCachedImages
	I0910 18:59:25.919502   71529 kubeadm.go:934] updating node { 192.168.50.138 8443 v1.31.0 crio true true} ...
	I0910 18:59:25.919622   71529 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-347802 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-347802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:59:25.919710   71529 ssh_runner.go:195] Run: crio config
	I0910 18:59:25.964461   71529 cni.go:84] Creating CNI manager for ""
	I0910 18:59:25.964489   71529 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:25.964509   71529 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:25.964535   71529 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.138 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-347802 NodeName:no-preload-347802 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:59:25.964698   71529 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-347802"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:59:25.964780   71529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:59:25.975304   71529 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:25.975371   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:25.985124   71529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0910 18:59:26.003355   71529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:26.020117   71529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0910 18:59:26.037026   71529 ssh_runner.go:195] Run: grep 192.168.50.138	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:26.041140   71529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:26.053643   71529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:26.175281   71529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:26.193153   71529 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802 for IP: 192.168.50.138
	I0910 18:59:26.193181   71529 certs.go:194] generating shared ca certs ...
	I0910 18:59:26.193203   71529 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:26.193398   71529 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:26.193452   71529 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:26.193466   71529 certs.go:256] generating profile certs ...
	I0910 18:59:26.193582   71529 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/client.key
	I0910 18:59:26.193664   71529 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/apiserver.key.93ff3787
	I0910 18:59:26.193722   71529 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/proxy-client.key
	I0910 18:59:26.193871   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:26.193924   71529 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:26.193978   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:26.194026   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:26.194053   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:26.194083   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:26.194132   71529 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:26.194868   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:26.231957   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:26.280213   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:26.310722   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:26.347855   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0910 18:59:26.386495   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:59:26.411742   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:26.435728   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/no-preload-347802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 18:59:26.460305   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:26.484974   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:26.508782   71529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:26.531397   71529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:26.548219   71529 ssh_runner.go:195] Run: openssl version
	I0910 18:59:26.553969   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:26.564950   71529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:26.569539   71529 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:26.569594   71529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:26.575677   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:59:26.586342   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:26.606946   71529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:26.611671   71529 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:26.611720   71529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:26.617271   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:26.627833   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:26.638225   71529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:26.642722   71529 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:26.642759   71529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:26.648359   71529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:26.659003   71529 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:26.663236   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:26.668896   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:26.674346   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:26.680028   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:26.685462   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:26.691097   71529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 18:59:26.696620   71529 kubeadm.go:392] StartCluster: {Name:no-preload-347802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-347802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:26.696704   71529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:26.696746   71529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:26.733823   71529 cri.go:89] found id: ""
	I0910 18:59:26.733883   71529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:26.744565   71529 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:26.744584   71529 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:26.744620   71529 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:26.754754   71529 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:26.755687   71529 kubeconfig.go:125] found "no-preload-347802" server: "https://192.168.50.138:8443"
	I0910 18:59:26.757732   71529 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:26.767140   71529 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.138
	I0910 18:59:26.767167   71529 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:26.767180   71529 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:26.767235   71529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:26.805555   71529 cri.go:89] found id: ""
	I0910 18:59:26.805616   71529 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:26.822806   71529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:26.832434   71529 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:26.832456   71529 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:26.832499   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:59:26.841225   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:26.841288   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:26.850145   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:59:26.859016   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:26.859070   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:26.868806   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:59:26.877814   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:26.877867   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:26.886985   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:59:26.895859   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:26.895911   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:59:26.905600   71529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:59:26.915716   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:27.038963   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:30.202285   72122 start.go:364] duration metric: took 3m27.611616445s to acquireMachinesLock for "old-k8s-version-432422"
	I0910 18:59:30.202346   72122 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:30.202377   72122 fix.go:54] fixHost starting: 
	I0910 18:59:30.202807   72122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:30.202842   72122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:30.222440   72122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0910 18:59:30.222927   72122 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:30.223415   72122 main.go:141] libmachine: Using API Version  1
	I0910 18:59:30.223435   72122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:30.223748   72122 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:30.223905   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:30.224034   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetState
	I0910 18:59:30.225464   72122 fix.go:112] recreateIfNeeded on old-k8s-version-432422: state=Stopped err=<nil>
	I0910 18:59:30.225505   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	W0910 18:59:30.225655   72122 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:30.227698   72122 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-432422" ...
	I0910 18:59:28.790020   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.790390   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Found IP for machine: 192.168.72.54
	I0910 18:59:28.790424   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has current primary IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.790435   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Reserving static IP address...
	I0910 18:59:28.790758   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Reserved static IP address: 192.168.72.54
	I0910 18:59:28.790780   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Waiting for SSH to be available...
	I0910 18:59:28.790811   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-557504", mac: "52:54:00:19:b8:3d", ip: "192.168.72.54"} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.790839   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | skip adding static IP to network mk-default-k8s-diff-port-557504 - found existing host DHCP lease matching {name: "default-k8s-diff-port-557504", mac: "52:54:00:19:b8:3d", ip: "192.168.72.54"}
	I0910 18:59:28.790856   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Getting to WaitForSSH function...
	I0910 18:59:28.792644   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.792947   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.792978   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.793114   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Using SSH client type: external
	I0910 18:59:28.793135   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa (-rw-------)
	I0910 18:59:28.793192   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:28.793242   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | About to run SSH command:
	I0910 18:59:28.793272   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | exit 0
	I0910 18:59:28.921644   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | SSH cmd err, output: <nil>: 
	I0910 18:59:28.921983   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetConfigRaw
	I0910 18:59:28.922663   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:28.925273   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.925614   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.925639   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.925884   71627 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/config.json ...
	I0910 18:59:28.926061   71627 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:28.926077   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:28.926272   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:28.928411   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.928731   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:28.928758   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:28.928909   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:28.929096   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:28.929249   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:28.929371   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:28.929552   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:28.929722   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:28.929732   71627 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:29.041454   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:29.041486   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetMachineName
	I0910 18:59:29.041745   71627 buildroot.go:166] provisioning hostname "default-k8s-diff-port-557504"
	I0910 18:59:29.041766   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetMachineName
	I0910 18:59:29.041965   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.044784   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.045141   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.045182   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.045358   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.045528   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.045705   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.045810   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.045968   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:29.046158   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:29.046173   71627 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-557504 && echo "default-k8s-diff-port-557504" | sudo tee /etc/hostname
	I0910 18:59:29.180227   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-557504
	
	I0910 18:59:29.180257   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.182815   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.183166   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.183200   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.183416   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.183612   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.183779   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.183883   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.184053   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:29.184258   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:29.184276   71627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-557504' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-557504/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-557504' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:59:29.315908   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:59:29.315942   71627 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:29.315981   71627 buildroot.go:174] setting up certificates
	I0910 18:59:29.315996   71627 provision.go:84] configureAuth start
	I0910 18:59:29.316013   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetMachineName
	I0910 18:59:29.316262   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:29.319207   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.319580   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.319609   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.319780   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.321973   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.322318   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.322352   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.322499   71627 provision.go:143] copyHostCerts
	I0910 18:59:29.322564   71627 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:29.322577   71627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:29.322647   71627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:29.322772   71627 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:29.322786   71627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:29.322832   71627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:29.322938   71627 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:29.322951   71627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:29.322986   71627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:29.323065   71627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-557504 san=[127.0.0.1 192.168.72.54 default-k8s-diff-port-557504 localhost minikube]
	I0910 18:59:29.488131   71627 provision.go:177] copyRemoteCerts
	I0910 18:59:29.488187   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:29.488210   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.491095   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.491441   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.491467   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.491666   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.491830   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.491973   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.492123   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:29.584016   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:29.614749   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0910 18:59:29.646904   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:29.677788   71627 provision.go:87] duration metric: took 361.777725ms to configureAuth
	I0910 18:59:29.677820   71627 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:29.678048   71627 config.go:182] Loaded profile config "default-k8s-diff-port-557504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:59:29.678135   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.680932   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.681372   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.681394   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.681674   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.681868   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.682011   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.682175   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.682431   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:29.682638   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:29.682665   71627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:29.934027   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:29.934058   71627 machine.go:96] duration metric: took 1.007985288s to provisionDockerMachine
	I0910 18:59:29.934071   71627 start.go:293] postStartSetup for "default-k8s-diff-port-557504" (driver="kvm2")
	I0910 18:59:29.934084   71627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:29.934104   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:29.934415   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:29.934447   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:29.937552   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.937917   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:29.937948   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:29.938110   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:29.938315   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:29.938496   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:29.938645   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:30.030842   71627 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:30.036158   71627 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:30.036180   71627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:30.036267   71627 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:30.036380   71627 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:30.036520   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:30.048860   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:30.075362   71627 start.go:296] duration metric: took 141.276186ms for postStartSetup
	I0910 18:59:30.075398   71627 fix.go:56] duration metric: took 20.817735357s for fixHost
	I0910 18:59:30.075421   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:30.078501   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.078996   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.079026   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.079195   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:30.079373   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.079561   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.079704   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:30.079908   71627 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:30.080089   71627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I0910 18:59:30.080102   71627 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:30.202112   71627 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994770.178719125
	
	I0910 18:59:30.202139   71627 fix.go:216] guest clock: 1725994770.178719125
	I0910 18:59:30.202149   71627 fix.go:229] Guest: 2024-09-10 18:59:30.178719125 +0000 UTC Remote: 2024-09-10 18:59:30.075402937 +0000 UTC m=+288.723404352 (delta=103.316188ms)
	I0910 18:59:30.202175   71627 fix.go:200] guest clock delta is within tolerance: 103.316188ms
	I0910 18:59:30.202184   71627 start.go:83] releasing machines lock for "default-k8s-diff-port-557504", held for 20.944552577s
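
	[editor's note] The fix.go lines above show the guest-clock check: the VM reports its time via `date +%s.%N`, the delta against the host timestamp is computed (~103ms here), and fixHost proceeds because the delta is within tolerance. A minimal Go sketch of that comparison follows; the tolerance value and helper name are assumptions for illustration, not minikube's actual code.

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    // guestClockDelta parses the guest's `date +%s.%N` output and returns the
    // absolute difference from the supplied host-side timestamp.
    func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(guestOut, 64) // e.g. "1725994770.178719125"
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return time.Duration(math.Abs(float64(host.Sub(guest)))), nil
    }

    func main() {
    	// Host-side timestamp taken from the log ("Remote: 2024-09-10 18:59:30.075402937 +0000 UTC").
    	host := time.Date(2024, time.September, 10, 18, 59, 30, 75402937, time.UTC)
    	// Assumed tolerance; the log only shows that a ~103ms delta was accepted.
    	const tolerance = 2 * time.Second
    	delta, err := guestClockDelta("1725994770.178719125", host)
    	if err != nil {
    		panic(err)
    	}
    	// Roughly reproduces the ~103ms delta reported in the log.
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance)
    }
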
	I0910 18:59:30.202221   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.202522   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:30.205728   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.206068   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.206101   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.206267   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.206830   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.207011   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:30.207100   71627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:30.207171   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:30.207378   71627 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:30.207399   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:30.209851   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210130   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210182   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.210220   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210400   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:30.210553   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.210555   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:30.210625   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:30.210735   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:30.210785   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:30.210849   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:30.210949   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:30.211002   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:30.211132   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:30.317738   71627 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:30.325333   71627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:30.485483   71627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:30.492979   71627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:30.493064   71627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:30.518974   71627 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:30.518998   71627 start.go:495] detecting cgroup driver to use...
	I0910 18:59:30.519192   71627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:30.539578   71627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:30.554986   71627 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:30.555045   71627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:30.570454   71627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:30.590125   71627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:30.738819   71627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:30.930750   71627 docker.go:233] disabling docker service ...
	I0910 18:59:30.930811   71627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:30.946226   71627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:30.961633   71627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:31.086069   71627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:31.208629   71627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:31.225988   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:31.248059   71627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 18:59:31.248127   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.260212   71627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:31.260296   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.271128   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.282002   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.296901   71627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:31.309739   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.325469   71627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.350404   71627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:31.366130   71627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:31.379206   71627 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:31.379259   71627 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:31.395015   71627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:31.406339   71627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:31.538783   71627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:59:31.656815   71627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:31.656886   71627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:31.665263   71627 start.go:563] Will wait 60s for crictl version
	I0910 18:59:31.665333   71627 ssh_runner.go:195] Run: which crictl
	I0910 18:59:31.670317   71627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:31.719549   71627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:31.719641   71627 ssh_runner.go:195] Run: crio --version
	I0910 18:59:31.753801   71627 ssh_runner.go:195] Run: crio --version
	I0910 18:59:31.787092   71627 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
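
	[editor's note] The crio.go lines above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup_manager, conmon_cgroup, default_sysctls) by sending sed commands through an SSH runner, then restart crio and wait for its socket and crictl version. A rough Go sketch of issuing one such edit over ssh with os/exec follows; the host and key path are placeholders, and minikube's real ssh_runner is not reproduced here.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Placeholder connection details; substitute the VM's IP and key path.
    	host := "docker@192.168.72.54"
    	key := "/path/to/id_rsa"

    	// Same style of edit the log shows: point cri-o at the desired pause image.
    	sedCmd := `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`

    	// ssh runs the remote command through the guest's login shell.
    	out, err := exec.Command("ssh",
    		"-i", key,
    		"-o", "StrictHostKeyChecking=no",
    		host,
    		sedCmd,
    	).CombinedOutput()
    	fmt.Printf("output: %s err: %v\n", out, err)
    }
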
	I0910 18:59:28.257536   71529 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.218537615s)
	I0910 18:59:28.257562   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:28.451173   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:28.516432   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:28.605746   71529 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:59:28.605823   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:29.106870   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:29.606340   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:29.623814   71529 api_server.go:72] duration metric: took 1.018071553s to wait for apiserver process to appear ...
	I0910 18:59:29.623842   71529 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:59:29.623864   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:29.624282   71529 api_server.go:269] stopped: https://192.168.50.138:8443/healthz: Get "https://192.168.50.138:8443/healthz": dial tcp 192.168.50.138:8443: connect: connection refused
	I0910 18:59:30.124145   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:30.228896   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .Start
	I0910 18:59:30.229066   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring networks are active...
	I0910 18:59:30.229735   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring network default is active
	I0910 18:59:30.230126   72122 main.go:141] libmachine: (old-k8s-version-432422) Ensuring network mk-old-k8s-version-432422 is active
	I0910 18:59:30.230559   72122 main.go:141] libmachine: (old-k8s-version-432422) Getting domain xml...
	I0910 18:59:30.231206   72122 main.go:141] libmachine: (old-k8s-version-432422) Creating domain...
	I0910 18:59:31.669616   72122 main.go:141] libmachine: (old-k8s-version-432422) Waiting to get IP...
	I0910 18:59:31.670682   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:31.671124   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:31.671225   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:31.671101   72995 retry.go:31] will retry after 285.109621ms: waiting for machine to come up
	I0910 18:59:31.957711   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:31.958140   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:31.958169   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:31.958103   72995 retry.go:31] will retry after 306.703176ms: waiting for machine to come up
	I0910 18:59:32.266797   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:32.267299   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:32.267333   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:32.267226   72995 retry.go:31] will retry after 327.953362ms: waiting for machine to come up
	I0910 18:59:32.494151   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:59:32.494177   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:59:32.494193   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:32.550283   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:59:32.550317   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:59:32.624486   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:32.646548   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:32.646583   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:33.124697   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:33.139775   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:33.139814   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:33.623998   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:33.632392   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:33.632430   71529 api_server.go:103] status: https://192.168.50.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:34.123979   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 18:59:34.133552   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0910 18:59:34.143511   71529 api_server.go:141] control plane version: v1.31.0
	I0910 18:59:34.143543   71529 api_server.go:131] duration metric: took 4.519693435s to wait for apiserver health ...
	I0910 18:59:34.143552   71529 cni.go:84] Creating CNI manager for ""
	I0910 18:59:34.143558   71529 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:34.145562   71529 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 18:59:31.788472   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetIP
	I0910 18:59:31.791698   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:31.792063   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:31.792102   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:31.792342   71627 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:31.798045   71627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:31.814552   71627 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-557504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-557504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:31.814718   71627 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 18:59:31.814775   71627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:31.863576   71627 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 18:59:31.863655   71627 ssh_runner.go:195] Run: which lz4
	I0910 18:59:31.868776   71627 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 18:59:31.874162   71627 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 18:59:31.874194   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 18:59:33.358271   71627 crio.go:462] duration metric: took 1.489531006s to copy over tarball
	I0910 18:59:33.358356   71627 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 18:59:35.759805   71627 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.401424942s)
	I0910 18:59:35.759833   71627 crio.go:469] duration metric: took 2.401529016s to extract the tarball
	I0910 18:59:35.759842   71627 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 18:59:35.797349   71627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:35.849544   71627 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 18:59:35.849571   71627 cache_images.go:84] Images are preloaded, skipping loading
	I0910 18:59:35.849583   71627 kubeadm.go:934] updating node { 192.168.72.54 8444 v1.31.0 crio true true} ...
	I0910 18:59:35.849706   71627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-557504 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-557504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:59:35.849783   71627 ssh_runner.go:195] Run: crio config
	I0910 18:59:35.896486   71627 cni.go:84] Creating CNI manager for ""
	I0910 18:59:35.896514   71627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:35.896534   71627 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:35.896556   71627 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-557504 NodeName:default-k8s-diff-port-557504 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 18:59:35.896707   71627 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-557504"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:59:35.896777   71627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 18:59:35.907249   71627 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:35.907337   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:35.917196   71627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0910 18:59:35.935072   71627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:35.953823   71627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0910 18:59:35.970728   71627 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:35.974648   71627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:35.986487   71627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:36.144443   71627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:36.164942   71627 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504 for IP: 192.168.72.54
	I0910 18:59:36.164972   71627 certs.go:194] generating shared ca certs ...
	I0910 18:59:36.164990   71627 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:36.165172   71627 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:36.165242   71627 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:36.165255   71627 certs.go:256] generating profile certs ...
	I0910 18:59:36.165382   71627 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/client.key
	I0910 18:59:36.165460   71627 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/apiserver.key.5cc31a18
	I0910 18:59:36.165505   71627 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/proxy-client.key
	I0910 18:59:36.165640   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:36.165680   71627 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:36.165700   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:36.165733   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:36.165770   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:36.165803   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:36.165874   71627 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:36.166687   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:36.203302   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:36.230599   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:36.269735   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:36.311674   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0910 18:59:36.354614   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 18:59:36.379082   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:34.146903   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 18:59:34.163037   71529 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 18:59:34.189830   71529 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:59:34.200702   71529 system_pods.go:59] 8 kube-system pods found
	I0910 18:59:34.200751   71529 system_pods.go:61] "coredns-6f6b679f8f-54rpl" [2e301d43-a54a-4836-abf8-a45f5bc15889] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 18:59:34.200762   71529 system_pods.go:61] "etcd-no-preload-347802" [0fdffb97-72c6-4588-9593-46bcbed0a9fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 18:59:34.200773   71529 system_pods.go:61] "kube-apiserver-no-preload-347802" [3cf5abac-1d94-4ee2-a962-9daad308ec8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 18:59:34.200782   71529 system_pods.go:61] "kube-controller-manager-no-preload-347802" [6769757d-57fd-46c8-8f78-d20f80e592d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 18:59:34.200788   71529 system_pods.go:61] "kube-proxy-7v9n8" [d01842ad-3dae-49e1-8570-db9bcf4d0afc] Running
	I0910 18:59:34.200797   71529 system_pods.go:61] "kube-scheduler-no-preload-347802" [20e59c6b-4387-4dd0-b242-78d107775275] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 18:59:34.200804   71529 system_pods.go:61] "metrics-server-6867b74b74-w8rqv" [52535081-4503-4136-963d-6b2db6c0224e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 18:59:34.200809   71529 system_pods.go:61] "storage-provisioner" [9f7c0178-7194-4c73-95a4-5a3c0091f3ac] Running
	I0910 18:59:34.200816   71529 system_pods.go:74] duration metric: took 10.965409ms to wait for pod list to return data ...
	I0910 18:59:34.200857   71529 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:59:34.204544   71529 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:59:34.204568   71529 node_conditions.go:123] node cpu capacity is 2
	I0910 18:59:34.204580   71529 node_conditions.go:105] duration metric: took 3.714534ms to run NodePressure ...
	I0910 18:59:34.204597   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:34.487106   71529 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 18:59:34.491817   71529 kubeadm.go:739] kubelet initialised
	I0910 18:59:34.491838   71529 kubeadm.go:740] duration metric: took 4.708046ms waiting for restarted kubelet to initialise ...
	I0910 18:59:34.491845   71529 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:34.496604   71529 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:34.501535   71529 pod_ready.go:98] node "no-preload-347802" hosting pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.501553   71529 pod_ready.go:82] duration metric: took 4.927724ms for pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:34.501561   71529 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-347802" hosting pod "coredns-6f6b679f8f-54rpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.501567   71529 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:34.505473   71529 pod_ready.go:98] node "no-preload-347802" hosting pod "etcd-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.505491   71529 pod_ready.go:82] duration metric: took 3.917111ms for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:34.505499   71529 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-347802" hosting pod "etcd-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.505507   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:34.510025   71529 pod_ready.go:98] node "no-preload-347802" hosting pod "kube-apiserver-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.510043   71529 pod_ready.go:82] duration metric: took 4.522609ms for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:34.510050   71529 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-347802" hosting pod "kube-apiserver-no-preload-347802" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-347802" has status "Ready":"False"
	I0910 18:59:34.510056   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:36.519023   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:32.597017   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:32.597589   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:32.597616   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:32.597554   72995 retry.go:31] will retry after 448.654363ms: waiting for machine to come up
	I0910 18:59:33.048100   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:33.048559   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:33.048590   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:33.048478   72995 retry.go:31] will retry after 654.829574ms: waiting for machine to come up
	I0910 18:59:33.704902   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:33.705446   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:33.705475   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:33.705363   72995 retry.go:31] will retry after 610.514078ms: waiting for machine to come up
	I0910 18:59:34.316978   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:34.317481   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:34.317503   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:34.317430   72995 retry.go:31] will retry after 1.125805817s: waiting for machine to come up
	I0910 18:59:35.444880   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:35.445369   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:35.445394   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:35.445312   72995 retry.go:31] will retry after 1.484426931s: waiting for machine to come up
	I0910 18:59:36.931028   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:36.931568   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:36.931613   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:36.931524   72995 retry.go:31] will retry after 1.819998768s: waiting for machine to come up
	I0910 18:59:36.403353   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/default-k8s-diff-port-557504/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 18:59:36.427345   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:36.452765   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:36.485795   71627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:36.512944   71627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:36.532454   71627 ssh_runner.go:195] Run: openssl version
	I0910 18:59:36.538449   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:36.550806   71627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:36.555761   71627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:36.555819   71627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:36.562430   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:36.573730   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:36.584987   71627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:36.589551   71627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:36.589615   71627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:36.595496   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:36.607821   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:36.620298   71627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:36.624888   71627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:36.624939   71627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:36.630534   71627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:59:36.641657   71627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:36.646317   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:36.652748   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:36.661166   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:36.670240   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:36.676776   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:36.686442   71627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 18:59:36.693233   71627 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-557504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-557504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:36.693351   71627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:36.693414   71627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:36.743159   71627 cri.go:89] found id: ""
	I0910 18:59:36.743256   71627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:36.754428   71627 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:36.754451   71627 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:36.754505   71627 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:36.765126   71627 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:36.766213   71627 kubeconfig.go:125] found "default-k8s-diff-port-557504" server: "https://192.168.72.54:8444"
	I0910 18:59:36.768428   71627 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:36.778678   71627 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I0910 18:59:36.778715   71627 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:36.778728   71627 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:36.778779   71627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:36.824031   71627 cri.go:89] found id: ""
	I0910 18:59:36.824107   71627 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:36.840585   71627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:36.851445   71627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:36.851462   71627 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:36.851508   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0910 18:59:36.860630   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:36.860682   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:36.869973   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0910 18:59:36.880034   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:36.880099   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:36.889684   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0910 18:59:36.898786   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:36.898870   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:36.908328   71627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0910 18:59:36.917272   71627 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:36.917334   71627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 18:59:36.928923   71627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:59:36.940238   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:37.079143   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:37.945317   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:38.157807   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:38.245283   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:38.353653   71627 api_server.go:52] waiting for apiserver process to appear ...
	I0910 18:59:38.353746   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:38.854791   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:39.354743   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:59:39.409511   71627 api_server.go:72] duration metric: took 1.055855393s to wait for apiserver process to appear ...
	I0910 18:59:39.409543   71627 api_server.go:88] waiting for apiserver healthz status ...
	I0910 18:59:39.409566   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:39.410104   71627 api_server.go:269] stopped: https://192.168.72.54:8444/healthz: Get "https://192.168.72.54:8444/healthz": dial tcp 192.168.72.54:8444: connect: connection refused
	I0910 18:59:39.909665   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:39.018802   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:41.517911   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:38.753463   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:38.754076   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:38.754107   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:38.754019   72995 retry.go:31] will retry after 2.258214375s: waiting for machine to come up
	I0910 18:59:41.013524   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:41.013988   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:41.014011   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:41.013910   72995 retry.go:31] will retry after 2.030553777s: waiting for machine to come up
	I0910 18:59:41.976133   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 18:59:41.976166   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 18:59:41.976179   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:42.080631   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:42.080674   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:42.409865   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:42.421093   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:42.421174   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:42.910272   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:42.914729   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 18:59:42.914757   71627 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 18:59:43.410280   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 18:59:43.414731   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I0910 18:59:43.421135   71627 api_server.go:141] control plane version: v1.31.0
	I0910 18:59:43.421163   71627 api_server.go:131] duration metric: took 4.011612782s to wait for apiserver health ...
	I0910 18:59:43.421172   71627 cni.go:84] Creating CNI manager for ""
	I0910 18:59:43.421178   71627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:43.423063   71627 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 18:59:43.424278   71627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 18:59:43.434823   71627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 18:59:43.461604   71627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 18:59:43.477566   71627 system_pods.go:59] 8 kube-system pods found
	I0910 18:59:43.477592   71627 system_pods.go:61] "coredns-6f6b679f8f-nq9fl" [87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 18:59:43.477600   71627 system_pods.go:61] "etcd-default-k8s-diff-port-557504" [484ce7e2-e5b6-4b96-928e-c4519e072425] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 18:59:43.477606   71627 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-557504" [55a71e48-2cca-484c-8186-176494f8158c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 18:59:43.477616   71627 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-557504" [98e7866a-c4a1-4b90-a758-506502405feb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 18:59:43.477623   71627 system_pods.go:61] "kube-proxy-4t8r9" [aca739fc-0169-433b-85f1-17bf3ab538cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0910 18:59:43.477631   71627 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-557504" [eaa24622-2f9c-47bd-b002-173d5fb06126] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 18:59:43.477638   71627 system_pods.go:61] "metrics-server-6867b74b74-4sfwg" [6b5d0161-6a62-4752-b714-ada6b3772956] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 18:59:43.477648   71627 system_pods.go:61] "storage-provisioner" [7536d42b-90f4-44de-a7ba-652f8e535304] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 18:59:43.477658   71627 system_pods.go:74] duration metric: took 16.035701ms to wait for pod list to return data ...
	I0910 18:59:43.477673   71627 node_conditions.go:102] verifying NodePressure condition ...
	I0910 18:59:43.485818   71627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 18:59:43.485840   71627 node_conditions.go:123] node cpu capacity is 2
	I0910 18:59:43.485850   71627 node_conditions.go:105] duration metric: took 8.173642ms to run NodePressure ...
	I0910 18:59:43.485864   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:43.752422   71627 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 18:59:43.756713   71627 kubeadm.go:739] kubelet initialised
	I0910 18:59:43.756735   71627 kubeadm.go:740] duration metric: took 4.285787ms waiting for restarted kubelet to initialise ...
	I0910 18:59:43.756744   71627 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:43.762384   71627 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.767080   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.767099   71627 pod_ready.go:82] duration metric: took 4.695864ms for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.767109   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.767116   71627 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.772560   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.772579   71627 pod_ready.go:82] duration metric: took 5.453737ms for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.772588   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.772593   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.776328   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.776345   71627 pod_ready.go:82] duration metric: took 3.745149ms for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.776352   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.776357   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.865825   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.865850   71627 pod_ready.go:82] duration metric: took 89.48636ms for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:43.865862   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:43.865868   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:44.264892   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-proxy-4t8r9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.264922   71627 pod_ready.go:82] duration metric: took 399.047611ms for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:44.264932   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-proxy-4t8r9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.264938   71627 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:44.665376   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.665402   71627 pod_ready.go:82] duration metric: took 400.457184ms for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:44.665413   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:44.665418   71627 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:45.065696   71627 pod_ready.go:98] node "default-k8s-diff-port-557504" hosting pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:45.065724   71627 pod_ready.go:82] duration metric: took 400.298527ms for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	E0910 18:59:45.065736   71627 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-557504" hosting pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:45.065743   71627 pod_ready.go:39] duration metric: took 1.308988307s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:45.065759   71627 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 18:59:45.077813   71627 ops.go:34] apiserver oom_adj: -16
	I0910 18:59:45.077838   71627 kubeadm.go:597] duration metric: took 8.323378955s to restartPrimaryControlPlane
	I0910 18:59:45.077846   71627 kubeadm.go:394] duration metric: took 8.384626167s to StartCluster
	I0910 18:59:45.077860   71627 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:45.077980   71627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:59:45.079979   71627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:45.080304   71627 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 18:59:45.080399   71627 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 18:59:45.080478   71627 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-557504"
	I0910 18:59:45.080510   71627 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-557504"
	I0910 18:59:45.080506   71627 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-557504"
	W0910 18:59:45.080523   71627 addons.go:243] addon storage-provisioner should already be in state true
	I0910 18:59:45.080519   71627 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-557504"
	I0910 18:59:45.080553   71627 host.go:66] Checking if "default-k8s-diff-port-557504" exists ...
	I0910 18:59:45.080568   71627 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-557504"
	I0910 18:59:45.080568   71627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-557504"
	W0910 18:59:45.080582   71627 addons.go:243] addon metrics-server should already be in state true
	I0910 18:59:45.080529   71627 config.go:182] Loaded profile config "default-k8s-diff-port-557504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:59:45.080608   71627 host.go:66] Checking if "default-k8s-diff-port-557504" exists ...
	I0910 18:59:45.080906   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.080932   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.080989   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.080994   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.081015   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.081015   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.081905   71627 out.go:177] * Verifying Kubernetes components...
	I0910 18:59:45.083206   71627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:45.096019   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43935
	I0910 18:59:45.096288   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38379
	I0910 18:59:45.096453   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.096730   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.096984   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.097012   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.097243   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.097273   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.097401   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.097596   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.097678   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0910 18:59:45.097693   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.098049   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.098464   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.098504   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.099185   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.099207   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.099592   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.100125   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.100166   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.101159   71627 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-557504"
	W0910 18:59:45.101175   71627 addons.go:243] addon default-storageclass should already be in state true
	I0910 18:59:45.101203   71627 host.go:66] Checking if "default-k8s-diff-port-557504" exists ...
	I0910 18:59:45.101501   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.101537   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.114823   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41241
	I0910 18:59:45.115253   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.115363   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38597
	I0910 18:59:45.115737   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.115759   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.115795   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.116106   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.116244   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.116270   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.116289   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.116696   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.117290   71627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:45.117327   71627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:45.117546   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36199
	I0910 18:59:45.117879   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.118496   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:45.118631   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.118643   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.118949   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.119107   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.120353   71627 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 18:59:45.120775   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:45.121685   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 18:59:45.121699   71627 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 18:59:45.121718   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:45.122500   71627 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:45.123762   71627 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:59:45.123778   71627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 18:59:45.123792   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:45.125345   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.125926   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:45.126161   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:45.126357   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:45.125943   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.126548   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:45.126661   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:45.127075   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.127507   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:45.127522   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.127675   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:45.127810   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:45.127905   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:45.127997   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:45.132978   71627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39839
	I0910 18:59:45.133303   71627 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:45.133757   71627 main.go:141] libmachine: Using API Version  1
	I0910 18:59:45.133779   71627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:45.134043   71627 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:45.134188   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetState
	I0910 18:59:45.135712   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .DriverName
	I0910 18:59:45.135917   71627 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 18:59:45.135928   71627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 18:59:45.135938   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHHostname
	I0910 18:59:45.138375   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.138616   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:b8:3d", ip: ""} in network mk-default-k8s-diff-port-557504: {Iface:virbr4 ExpiryTime:2024-09-10 19:51:22 +0000 UTC Type:0 Mac:52:54:00:19:b8:3d Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-557504 Clientid:01:52:54:00:19:b8:3d}
	I0910 18:59:45.138629   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | domain default-k8s-diff-port-557504 has defined IP address 192.168.72.54 and MAC address 52:54:00:19:b8:3d in network mk-default-k8s-diff-port-557504
	I0910 18:59:45.138768   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHPort
	I0910 18:59:45.138937   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHKeyPath
	I0910 18:59:45.139054   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .GetSSHUsername
	I0910 18:59:45.139181   71627 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/default-k8s-diff-port-557504/id_rsa Username:docker}
	I0910 18:59:45.293036   71627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:45.311747   71627 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-557504" to be "Ready" ...
	I0910 18:59:45.425820   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 18:59:45.425852   71627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 18:59:45.430783   71627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 18:59:45.441452   71627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 18:59:45.481245   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 18:59:45.481268   71627 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 18:59:45.573348   71627 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 18:59:45.573373   71627 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 18:59:45.634830   71627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 18:59:46.589194   71627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.147713188s)
	I0910 18:59:46.589253   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589266   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589284   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589311   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589321   71627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.158508631s)
	I0910 18:59:46.589343   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589355   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589700   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.589700   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.589723   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589729   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589730   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.589736   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.589738   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589741   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.589751   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.589752   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589761   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589774   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589816   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589755   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.589852   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.589961   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.589971   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.590192   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) DBG | Closing plugin on server side
	I0910 18:59:46.590207   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.590220   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.591675   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.591692   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.591702   71627 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-557504"
	I0910 18:59:46.595906   71627 main.go:141] libmachine: Making call to close driver server
	I0910 18:59:46.595921   71627 main.go:141] libmachine: (default-k8s-diff-port-557504) Calling .Close
	I0910 18:59:46.596105   71627 main.go:141] libmachine: Successfully made call to close driver server
	I0910 18:59:46.596126   71627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 18:59:46.598033   71627 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0910 18:59:44.023282   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:46.516768   71529 pod_ready.go:103] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:47.016400   71529 pod_ready.go:93] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:47.016423   71529 pod_ready.go:82] duration metric: took 12.506359172s for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:47.016435   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7v9n8" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:47.020809   71529 pod_ready.go:93] pod "kube-proxy-7v9n8" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:47.020827   71529 pod_ready.go:82] duration metric: took 4.386051ms for pod "kube-proxy-7v9n8" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:47.020836   71529 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:43.046937   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:43.047363   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:43.047393   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:43.047314   72995 retry.go:31] will retry after 2.233047134s: waiting for machine to come up
	I0910 18:59:45.282610   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:45.283104   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | unable to find current IP address of domain old-k8s-version-432422 in network mk-old-k8s-version-432422
	I0910 18:59:45.283133   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | I0910 18:59:45.283026   72995 retry.go:31] will retry after 4.238676711s: waiting for machine to come up
	I0910 18:59:51.182133   71183 start.go:364] duration metric: took 56.422548201s to acquireMachinesLock for "embed-certs-836868"
	I0910 18:59:51.182195   71183 start.go:96] Skipping create...Using existing machine configuration
	I0910 18:59:51.182206   71183 fix.go:54] fixHost starting: 
	I0910 18:59:51.182600   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:59:51.182637   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:59:51.198943   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33067
	I0910 18:59:51.199345   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:59:51.199803   71183 main.go:141] libmachine: Using API Version  1
	I0910 18:59:51.199828   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:59:51.200153   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:59:51.200364   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 18:59:51.200493   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 18:59:51.202100   71183 fix.go:112] recreateIfNeeded on embed-certs-836868: state=Stopped err=<nil>
	I0910 18:59:51.202123   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	W0910 18:59:51.202286   71183 fix.go:138] unexpected machine state, will restart: <nil>
	I0910 18:59:51.204028   71183 out.go:177] * Restarting existing kvm2 VM for "embed-certs-836868" ...
	I0910 18:59:46.599125   71627 addons.go:510] duration metric: took 1.518742666s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0910 18:59:47.316003   71627 node_ready.go:53] node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:49.316691   71627 node_ready.go:53] node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:49.027374   71529 pod_ready.go:93] pod "kube-scheduler-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:49.027393   71529 pod_ready.go:82] duration metric: took 2.006551523s for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:49.027403   71529 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:51.034568   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:51.205180   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Start
	I0910 18:59:51.205332   71183 main.go:141] libmachine: (embed-certs-836868) Ensuring networks are active...
	I0910 18:59:51.205952   71183 main.go:141] libmachine: (embed-certs-836868) Ensuring network default is active
	I0910 18:59:51.206322   71183 main.go:141] libmachine: (embed-certs-836868) Ensuring network mk-embed-certs-836868 is active
	I0910 18:59:51.206717   71183 main.go:141] libmachine: (embed-certs-836868) Getting domain xml...
	I0910 18:59:51.207430   71183 main.go:141] libmachine: (embed-certs-836868) Creating domain...
	I0910 18:59:49.526000   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.526536   72122 main.go:141] libmachine: (old-k8s-version-432422) Found IP for machine: 192.168.61.51
	I0910 18:59:49.526558   72122 main.go:141] libmachine: (old-k8s-version-432422) Reserving static IP address...
	I0910 18:59:49.526569   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has current primary IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.527018   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "old-k8s-version-432422", mac: "52:54:00:65:ab:4d", ip: "192.168.61.51"} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.527063   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | skip adding static IP to network mk-old-k8s-version-432422 - found existing host DHCP lease matching {name: "old-k8s-version-432422", mac: "52:54:00:65:ab:4d", ip: "192.168.61.51"}
	I0910 18:59:49.527084   72122 main.go:141] libmachine: (old-k8s-version-432422) Reserved static IP address: 192.168.61.51
	I0910 18:59:49.527099   72122 main.go:141] libmachine: (old-k8s-version-432422) Waiting for SSH to be available...
	I0910 18:59:49.527113   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Getting to WaitForSSH function...
	I0910 18:59:49.529544   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.529962   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.529987   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.530143   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Using SSH client type: external
	I0910 18:59:49.530170   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa (-rw-------)
	I0910 18:59:49.530195   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 18:59:49.530208   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | About to run SSH command:
	I0910 18:59:49.530245   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | exit 0
	I0910 18:59:49.656944   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | SSH cmd err, output: <nil>: 
	I0910 18:59:49.657307   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetConfigRaw
	I0910 18:59:49.657926   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:49.660332   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.660689   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.660711   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.660992   72122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/config.json ...
	I0910 18:59:49.661238   72122 machine.go:93] provisionDockerMachine start ...
	I0910 18:59:49.661259   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:49.661480   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.663824   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.664208   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.664236   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.664370   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.664565   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.664712   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.664887   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.665103   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.665392   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.665406   72122 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 18:59:49.769433   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 18:59:49.769468   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:49.769716   72122 buildroot.go:166] provisioning hostname "old-k8s-version-432422"
	I0910 18:59:49.769740   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:49.769918   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.772324   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.772710   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.772736   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.772875   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.773061   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.773245   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.773384   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.773554   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.773751   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.773764   72122 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-432422 && echo "old-k8s-version-432422" | sudo tee /etc/hostname
	I0910 18:59:49.891230   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-432422
	
	I0910 18:59:49.891259   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:49.894272   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.894641   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:49.894683   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:49.894820   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:49.894983   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.895094   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:49.895210   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:49.895330   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:49.895540   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:49.895559   72122 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-432422' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-432422/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-432422' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 18:59:50.011767   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 18:59:50.011795   72122 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 18:59:50.011843   72122 buildroot.go:174] setting up certificates
	I0910 18:59:50.011854   72122 provision.go:84] configureAuth start
	I0910 18:59:50.011866   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetMachineName
	I0910 18:59:50.012185   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:50.014947   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.015352   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.015388   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.015549   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.017712   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.018002   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.018036   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.018193   72122 provision.go:143] copyHostCerts
	I0910 18:59:50.018251   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 18:59:50.018265   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 18:59:50.018337   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 18:59:50.018481   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 18:59:50.018491   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 18:59:50.018513   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 18:59:50.018585   72122 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 18:59:50.018594   72122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 18:59:50.018612   72122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 18:59:50.018667   72122 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-432422 san=[127.0.0.1 192.168.61.51 localhost minikube old-k8s-version-432422]
	I0910 18:59:50.528798   72122 provision.go:177] copyRemoteCerts
	I0910 18:59:50.528864   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 18:59:50.528900   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.532154   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.532576   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.532613   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.532765   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.532995   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.533205   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.533370   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:50.620169   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0910 18:59:50.647163   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0910 18:59:50.679214   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 18:59:50.704333   72122 provision.go:87] duration metric: took 692.46607ms to configureAuth
	I0910 18:59:50.704360   72122 buildroot.go:189] setting minikube options for container-runtime
	I0910 18:59:50.704545   72122 config.go:182] Loaded profile config "old-k8s-version-432422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0910 18:59:50.704639   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.707529   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.707903   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.707931   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.708082   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.708297   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.708463   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.708641   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.708786   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:50.708954   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:50.708969   72122 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 18:59:50.935375   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 18:59:50.935403   72122 machine.go:96] duration metric: took 1.274152353s to provisionDockerMachine
	I0910 18:59:50.935414   72122 start.go:293] postStartSetup for "old-k8s-version-432422" (driver="kvm2")
	I0910 18:59:50.935424   72122 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 18:59:50.935448   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:50.935763   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 18:59:50.935796   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:50.938507   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.938865   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:50.938902   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:50.939008   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:50.939198   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:50.939529   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:50.939689   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.024726   72122 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 18:59:51.029522   72122 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 18:59:51.029547   72122 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 18:59:51.029632   72122 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 18:59:51.029734   72122 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 18:59:51.029848   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 18:59:51.042454   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:51.068748   72122 start.go:296] duration metric: took 133.318275ms for postStartSetup
	I0910 18:59:51.068792   72122 fix.go:56] duration metric: took 20.866428313s for fixHost
	I0910 18:59:51.068816   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.071533   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.071894   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.071921   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.072072   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.072264   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.072468   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.072616   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.072784   72122 main.go:141] libmachine: Using SSH client type: native
	I0910 18:59:51.072938   72122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0910 18:59:51.072948   72122 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 18:59:51.181996   72122 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994791.151610055
	
	I0910 18:59:51.182016   72122 fix.go:216] guest clock: 1725994791.151610055
	I0910 18:59:51.182024   72122 fix.go:229] Guest: 2024-09-10 18:59:51.151610055 +0000 UTC Remote: 2024-09-10 18:59:51.068796263 +0000 UTC m=+228.614166738 (delta=82.813792ms)
	I0910 18:59:51.182048   72122 fix.go:200] guest clock delta is within tolerance: 82.813792ms
	I0910 18:59:51.182055   72122 start.go:83] releasing machines lock for "old-k8s-version-432422", held for 20.979733564s
	I0910 18:59:51.182094   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.182331   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:51.184857   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.185183   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.185212   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.185346   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.185840   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.186006   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .DriverName
	I0910 18:59:51.186079   72122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 18:59:51.186143   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.186215   72122 ssh_runner.go:195] Run: cat /version.json
	I0910 18:59:51.186238   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHHostname
	I0910 18:59:51.189304   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189674   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.189698   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189765   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.189879   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.190057   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.190212   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.190230   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:51.190255   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:51.190358   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.190470   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHPort
	I0910 18:59:51.190652   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHKeyPath
	I0910 18:59:51.190817   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetSSHUsername
	I0910 18:59:51.190948   72122 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/old-k8s-version-432422/id_rsa Username:docker}
	I0910 18:59:51.296968   72122 ssh_runner.go:195] Run: systemctl --version
	I0910 18:59:51.303144   72122 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 18:59:51.447027   72122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 18:59:51.454963   72122 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 18:59:51.455032   72122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 18:59:51.474857   72122 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 18:59:51.474882   72122 start.go:495] detecting cgroup driver to use...
	I0910 18:59:51.474957   72122 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 18:59:51.490457   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 18:59:51.504502   72122 docker.go:217] disabling cri-docker service (if available) ...
	I0910 18:59:51.504569   72122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 18:59:51.523331   72122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 18:59:51.543438   72122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 18:59:51.678734   72122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 18:59:51.831736   72122 docker.go:233] disabling docker service ...
	I0910 18:59:51.831804   72122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 18:59:51.846805   72122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 18:59:51.865771   72122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 18:59:52.012922   72122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 18:59:52.161595   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 18:59:52.180034   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 18:59:52.200984   72122 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0910 18:59:52.201041   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.211927   72122 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 18:59:52.211989   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.223601   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.234211   72122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 18:59:52.246209   72122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 18:59:52.264079   72122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 18:59:52.277144   72122 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 18:59:52.277204   72122 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 18:59:52.292683   72122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 18:59:52.304601   72122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:52.421971   72122 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0910 18:59:52.544386   72122 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 18:59:52.544459   72122 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 18:59:52.551436   72122 start.go:563] Will wait 60s for crictl version
	I0910 18:59:52.551487   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:52.555614   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 18:59:52.598031   72122 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 18:59:52.598128   72122 ssh_runner.go:195] Run: crio --version
	I0910 18:59:52.629578   72122 ssh_runner.go:195] Run: crio --version
	I0910 18:59:52.662403   72122 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0910 18:59:51.815436   71627 node_ready.go:53] node "default-k8s-diff-port-557504" has status "Ready":"False"
	I0910 18:59:52.816775   71627 node_ready.go:49] node "default-k8s-diff-port-557504" has status "Ready":"True"
	I0910 18:59:52.816809   71627 node_ready.go:38] duration metric: took 7.505015999s for node "default-k8s-diff-port-557504" to be "Ready" ...
	I0910 18:59:52.816821   71627 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 18:59:52.823528   71627 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.829667   71627 pod_ready.go:93] pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:52.829688   71627 pod_ready.go:82] duration metric: took 6.135159ms for pod "coredns-6f6b679f8f-nq9fl" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.829696   71627 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.833912   71627 pod_ready.go:93] pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:52.833933   71627 pod_ready.go:82] duration metric: took 4.231672ms for pod "etcd-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.833942   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.838863   71627 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:52.838883   71627 pod_ready.go:82] duration metric: took 4.934379ms for pod "kube-apiserver-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:52.838897   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:53.851413   71627 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:53.851437   71627 pod_ready.go:82] duration metric: took 1.012531075s for pod "kube-controller-manager-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:53.851447   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:54.020886   71627 pod_ready.go:93] pod "kube-proxy-4t8r9" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:54.020910   71627 pod_ready.go:82] duration metric: took 169.456474ms for pod "kube-proxy-4t8r9" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:54.020926   71627 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:55.217416   71627 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace has status "Ready":"True"
	I0910 18:59:55.217440   71627 pod_ready.go:82] duration metric: took 1.196506075s for pod "kube-scheduler-default-k8s-diff-port-557504" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:55.217451   71627 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	I0910 18:59:53.036769   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:55.536544   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:52.544041   71183 main.go:141] libmachine: (embed-certs-836868) Waiting to get IP...
	I0910 18:59:52.545001   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:52.545522   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:52.545586   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:52.545494   73202 retry.go:31] will retry after 260.451431ms: waiting for machine to come up
	I0910 18:59:52.807914   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:52.808351   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:52.808377   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:52.808307   73202 retry.go:31] will retry after 340.526757ms: waiting for machine to come up
	I0910 18:59:53.150854   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:53.151446   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:53.151476   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:53.151404   73202 retry.go:31] will retry after 470.620322ms: waiting for machine to come up
	I0910 18:59:53.624169   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:53.624709   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:53.624747   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:53.624657   73202 retry.go:31] will retry after 529.186273ms: waiting for machine to come up
	I0910 18:59:54.155156   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:54.155644   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:54.155673   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:54.155599   73202 retry.go:31] will retry after 575.877001ms: waiting for machine to come up
	I0910 18:59:54.733522   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:54.734049   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:54.734092   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:54.734000   73202 retry.go:31] will retry after 577.385946ms: waiting for machine to come up
	I0910 18:59:55.312705   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:55.313087   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:55.313114   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:55.313059   73202 retry.go:31] will retry after 735.788809ms: waiting for machine to come up
	I0910 18:59:56.049771   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:56.050272   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:56.050306   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:56.050224   73202 retry.go:31] will retry after 1.433431053s: waiting for machine to come up
	I0910 18:59:52.663465   72122 main.go:141] libmachine: (old-k8s-version-432422) Calling .GetIP
	I0910 18:59:52.666401   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:52.666796   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ab:4d", ip: ""} in network mk-old-k8s-version-432422: {Iface:virbr2 ExpiryTime:2024-09-10 19:59:42 +0000 UTC Type:0 Mac:52:54:00:65:ab:4d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-432422 Clientid:01:52:54:00:65:ab:4d}
	I0910 18:59:52.666843   72122 main.go:141] libmachine: (old-k8s-version-432422) DBG | domain old-k8s-version-432422 has defined IP address 192.168.61.51 and MAC address 52:54:00:65:ab:4d in network mk-old-k8s-version-432422
	I0910 18:59:52.667002   72122 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0910 18:59:52.672338   72122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:52.688427   72122 kubeadm.go:883] updating cluster {Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 18:59:52.688559   72122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 18:59:52.688623   72122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:52.740370   72122 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0910 18:59:52.740447   72122 ssh_runner.go:195] Run: which lz4
	I0910 18:59:52.744925   72122 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 18:59:52.749840   72122 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 18:59:52.749872   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0910 18:59:54.437031   72122 crio.go:462] duration metric: took 1.692132914s to copy over tarball
	I0910 18:59:54.437124   72122 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 18:59:57.462705   72122 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.025545297s)
	I0910 18:59:57.462743   72122 crio.go:469] duration metric: took 3.025690485s to extract the tarball
	I0910 18:59:57.462753   72122 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 18:59:57.223959   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:59.224657   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:01.224783   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:58.035610   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:00.535779   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 18:59:57.485417   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:57.485870   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:57.485896   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:57.485815   73202 retry.go:31] will retry after 1.638565814s: waiting for machine to come up
	I0910 18:59:59.126134   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 18:59:59.126625   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 18:59:59.126657   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 18:59:59.126576   73202 retry.go:31] will retry after 2.127929201s: waiting for machine to come up
	I0910 19:00:01.256121   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:01.256665   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 19:00:01.256694   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 19:00:01.256612   73202 retry.go:31] will retry after 2.530100505s: waiting for machine to come up
	I0910 18:59:57.508817   72122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 18:59:57.551327   72122 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0910 18:59:57.551350   72122 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 18:59:57.551434   72122 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:57.551704   72122 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.551776   72122 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.552000   72122 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.551807   72122 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.551846   72122 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.551714   72122 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0910 18:59:57.551917   72122 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.553642   72122 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.553660   72122 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.553917   72122 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:57.553935   72122 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0910 18:59:57.554014   72122 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.554160   72122 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.554376   72122 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.554662   72122 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.726191   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.742799   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.745264   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.753214   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.768122   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.770828   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0910 18:59:57.774835   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.807657   72122 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0910 18:59:57.807693   72122 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.807733   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.908662   72122 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0910 18:59:57.908678   72122 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0910 18:59:57.908707   72122 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.908711   72122 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.908759   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.908760   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.920214   72122 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0910 18:59:57.920248   72122 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0910 18:59:57.920258   72122 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.920280   72122 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.920304   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.920313   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.937914   72122 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0910 18:59:57.937952   72122 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:57.937958   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:57.937999   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:57.938033   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:57.938006   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:57.938073   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:57.938063   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:57.938157   72122 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0910 18:59:57.938185   72122 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0910 18:59:57.938215   72122 ssh_runner.go:195] Run: which crictl
	I0910 18:59:58.044082   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:58.044139   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:58.044146   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:58.044173   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.045813   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:58.045816   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:58.045849   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.198804   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0910 18:59:58.198841   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0910 18:59:58.198881   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0910 18:59:58.198944   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.198978   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0910 18:59:58.199000   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0910 18:59:58.199081   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.353153   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0910 18:59:58.353217   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0910 18:59:58.353232   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0910 18:59:58.353277   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0910 18:59:58.359353   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0910 18:59:58.359363   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0910 18:59:58.359421   72122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0910 18:59:58.386872   72122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 18:59:58.407734   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0910 18:59:58.425479   72122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0910 18:59:58.553340   72122 cache_images.go:92] duration metric: took 1.001972084s to LoadCachedImages
	W0910 18:59:58.553438   72122 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19598-5973/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0910 18:59:58.553455   72122 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.20.0 crio true true} ...
	I0910 18:59:58.553634   72122 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-432422 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 18:59:58.553722   72122 ssh_runner.go:195] Run: crio config
	I0910 18:59:58.605518   72122 cni.go:84] Creating CNI manager for ""
	I0910 18:59:58.605542   72122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 18:59:58.605554   72122 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 18:59:58.605577   72122 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-432422 NodeName:old-k8s-version-432422 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0910 18:59:58.605744   72122 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-432422"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 18:59:58.605814   72122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0910 18:59:58.618033   72122 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 18:59:58.618096   72122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 18:59:58.629175   72122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0910 18:59:58.653830   72122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 18:59:58.679797   72122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0910 18:59:58.698692   72122 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0910 18:59:58.702565   72122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 18:59:58.715128   72122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 18:59:58.858262   72122 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 18:59:58.876681   72122 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422 for IP: 192.168.61.51
	I0910 18:59:58.876719   72122 certs.go:194] generating shared ca certs ...
	I0910 18:59:58.876740   72122 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:58.876921   72122 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 18:59:58.876983   72122 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 18:59:58.876996   72122 certs.go:256] generating profile certs ...
	I0910 18:59:58.877129   72122 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/client.key
	I0910 18:59:58.877210   72122 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key.da6b542b
	I0910 18:59:58.877264   72122 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.key
	I0910 18:59:58.877424   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 18:59:58.877473   72122 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 18:59:58.877491   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 18:59:58.877528   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 18:59:58.877560   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 18:59:58.877591   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 18:59:58.877648   72122 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 18:59:58.878410   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 18:59:58.936013   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 18:59:58.969736   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 18:59:59.017414   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 18:59:59.063599   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0910 18:59:59.093934   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 18:59:59.138026   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 18:59:59.166507   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/old-k8s-version-432422/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 18:59:59.196972   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 18:59:59.223596   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 18:59:59.250627   72122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 18:59:59.279886   72122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 18:59:59.300491   72122 ssh_runner.go:195] Run: openssl version
	I0910 18:59:59.306521   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 18:59:59.317238   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.321625   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.321682   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 18:59:59.327532   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 18:59:59.339028   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 18:59:59.350578   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.355025   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.355106   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 18:59:59.360701   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 18:59:59.375040   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 18:59:59.389867   72122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.395829   72122 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.395890   72122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 18:59:59.402425   72122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 18:59:59.414077   72122 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 18:59:59.418909   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 18:59:59.425061   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 18:59:59.431213   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 18:59:59.437581   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 18:59:59.443603   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 18:59:59.449820   72122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 18:59:59.456100   72122 kubeadm.go:392] StartCluster: {Name:old-k8s-version-432422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-432422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 18:59:59.456189   72122 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 18:59:59.456234   72122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:59.497167   72122 cri.go:89] found id: ""
	I0910 18:59:59.497227   72122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 18:59:59.508449   72122 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 18:59:59.508474   72122 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 18:59:59.508527   72122 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 18:59:59.521416   72122 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:59:59.522489   72122 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-432422" does not appear in /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:59:59.523125   72122 kubeconfig.go:62] /home/jenkins/minikube-integration/19598-5973/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-432422" cluster setting kubeconfig missing "old-k8s-version-432422" context setting]
	I0910 18:59:59.524107   72122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 18:59:59.637793   72122 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 18:59:59.651879   72122 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.51
	I0910 18:59:59.651916   72122 kubeadm.go:1160] stopping kube-system containers ...
	I0910 18:59:59.651930   72122 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 18:59:59.651989   72122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 18:59:59.691857   72122 cri.go:89] found id: ""
	I0910 18:59:59.691922   72122 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 18:59:59.708610   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 18:59:59.718680   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 18:59:59.718702   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 18:59:59.718755   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 18:59:59.729965   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 18:59:59.730028   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 18:59:59.740037   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 18:59:59.750640   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 18:59:59.750706   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 18:59:59.762436   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 18:59:59.773456   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 18:59:59.773522   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 18:59:59.783438   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 18:59:59.792996   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 18:59:59.793056   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
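The block above is the stale-config cleanup: for each kubeconfig-style file under /etc/kubernetes, grep for the expected control-plane endpoint and, if the grep fails (here with status 2 because the files do not exist at all), remove the file so kubeadm can regenerate it. A rough local sketch of that decision, with the endpoint and file list copied from the log, `grep -q` substituted to suppress output, and the function name invented for illustration:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// removeIfMissingEndpoint deletes conf unless it already references endpoint.
// Mirrors the grep-then-rm pattern in the log: a failed grep (file missing or
// endpoint absent) leads to removal so kubeadm can rewrite the file.
func removeIfMissingEndpoint(conf, endpoint string) error {
	if exec.Command("grep", "-q", endpoint, conf).Run() == nil {
		return nil // endpoint already present, keep the file
	}
	err := os.Remove(conf)
	if os.IsNotExist(err) {
		return nil // nothing to clean up
	}
	return err
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfMissingEndpoint(f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}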
	I0910 18:59:59.805000   72122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 18:59:59.815384   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 18:59:59.955068   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:00.842403   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.102530   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:01.212897   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
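Because existing configuration files were found, the restart path does not run a full `kubeadm init`; it replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same kubeadm.yaml, each with the pinned v1.20.0 binaries prepended to PATH. A simplified sketch of that sequencing, run locally instead of through ssh_runner (names and error handling are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	binDir := "/var/lib/minikube/binaries/v1.20.0"
	cfg := "/var/tmp/minikube/kubeadm.yaml"

	// Same phase order as in the log: certs, kubeconfig, kubelet-start,
	// control-plane, then a local etcd.
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command(filepath.Join(binDir, "kubeadm"), args...)
		// Put the pinned binaries first on PATH so anything kubeadm spawns
		// also resolves to the versioned tools, as the log's env PATH= does.
		cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}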
	I0910 19:00:01.340128   72122 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:00:01.340217   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:01.841004   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:02.340913   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:03.225898   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:05.723882   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:03.034295   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:05.034431   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:03.790275   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:03.790710   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 19:00:03.790736   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 19:00:03.790662   73202 retry.go:31] will retry after 3.202952028s: waiting for machine to come up
	I0910 19:00:06.995302   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:06.996124   71183 main.go:141] libmachine: (embed-certs-836868) DBG | unable to find current IP address of domain embed-certs-836868 in network mk-embed-certs-836868
	I0910 19:00:06.996149   71183 main.go:141] libmachine: (embed-certs-836868) DBG | I0910 19:00:06.996073   73202 retry.go:31] will retry after 3.076425277s: waiting for machine to come up
	I0910 19:00:02.840935   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:03.340938   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:03.840669   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:04.341213   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:04.841274   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:05.340698   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:05.841152   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:06.340425   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:06.841001   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:07.341198   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
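The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines are a poll loop: after the control-plane phases, process 72122 checks roughly twice a second whether a kube-apiserver process has appeared yet. A stripped-down sketch of such a poll, run locally with a deadline (the interval and pattern come from the log; the 2-minute timeout below is an arbitrary choice for the sketch):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a matching process shows up or
// the deadline passes. pgrep exits 0 on a match and non-zero otherwise.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the log's ~0.5s retries
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver process is up")
}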
	I0910 19:00:07.724121   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:10.223744   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:07.533428   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:09.534830   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:12.033655   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:10.075125   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.075606   71183 main.go:141] libmachine: (embed-certs-836868) Found IP for machine: 192.168.39.107
	I0910 19:00:10.075634   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has current primary IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.075643   71183 main.go:141] libmachine: (embed-certs-836868) Reserving static IP address...
	I0910 19:00:10.076046   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "embed-certs-836868", mac: "52:54:00:24:ef:f2", ip: "192.168.39.107"} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.076075   71183 main.go:141] libmachine: (embed-certs-836868) DBG | skip adding static IP to network mk-embed-certs-836868 - found existing host DHCP lease matching {name: "embed-certs-836868", mac: "52:54:00:24:ef:f2", ip: "192.168.39.107"}
	I0910 19:00:10.076103   71183 main.go:141] libmachine: (embed-certs-836868) Reserved static IP address: 192.168.39.107
	I0910 19:00:10.076122   71183 main.go:141] libmachine: (embed-certs-836868) Waiting for SSH to be available...
	I0910 19:00:10.076133   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Getting to WaitForSSH function...
	I0910 19:00:10.078039   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.078327   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.078352   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.078452   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Using SSH client type: external
	I0910 19:00:10.078475   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa (-rw-------)
	I0910 19:00:10.078514   71183 main.go:141] libmachine: (embed-certs-836868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 19:00:10.078527   71183 main.go:141] libmachine: (embed-certs-836868) DBG | About to run SSH command:
	I0910 19:00:10.078548   71183 main.go:141] libmachine: (embed-certs-836868) DBG | exit 0
	I0910 19:00:10.201403   71183 main.go:141] libmachine: (embed-certs-836868) DBG | SSH cmd err, output: <nil>: 
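With a DHCP lease found for 52:54:00:24:ef:f2, libmachine waits for SSH by shelling out to the system ssh binary with the options shown (no known-hosts checking, key-only auth, 10s connect timeout) and running `exit 0`; a zero exit status means the guest is accepting logins. A minimal sketch of that reachability probe, reusing the flags and key path from the log with the arguments reordered into the conventional order:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// sshReachable runs `exit 0` over ssh with the same throwaway-host options
// the log shows; success means the VM is accepting SSH connections.
func sshReachable(user, ip, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, ip),
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa"
	if sshReachable("docker", "192.168.39.107", key) {
		fmt.Println("SSH is available")
	} else {
		fmt.Fprintln(os.Stderr, "SSH not reachable yet")
		os.Exit(1)
	}
}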
	I0910 19:00:10.201748   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetConfigRaw
	I0910 19:00:10.202405   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:10.204760   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.205130   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.205160   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.205408   71183 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/config.json ...
	I0910 19:00:10.205697   71183 machine.go:93] provisionDockerMachine start ...
	I0910 19:00:10.205714   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:10.205924   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.208095   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.208394   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.208418   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.208534   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.208712   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.208856   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.208958   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.209193   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.209412   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.209427   71183 main.go:141] libmachine: About to run SSH command:
	hostname
	I0910 19:00:10.313247   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0910 19:00:10.313278   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 19:00:10.313556   71183 buildroot.go:166] provisioning hostname "embed-certs-836868"
	I0910 19:00:10.313584   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 19:00:10.313765   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.316135   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.316569   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.316592   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.316739   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.316893   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.317046   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.317165   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.317288   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.317490   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.317506   71183 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-836868 && echo "embed-certs-836868" | sudo tee /etc/hostname
	I0910 19:00:10.433585   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-836868
	
	I0910 19:00:10.433608   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.436076   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.436407   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.436440   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.436627   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.436826   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.436972   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.437146   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.437314   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.437480   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.437495   71183 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-836868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-836868/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-836868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 19:00:10.546105   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 19:00:10.546146   71183 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5973/.minikube}
	I0910 19:00:10.546186   71183 buildroot.go:174] setting up certificates
	I0910 19:00:10.546197   71183 provision.go:84] configureAuth start
	I0910 19:00:10.546214   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetMachineName
	I0910 19:00:10.546485   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:10.549236   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.549567   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.549594   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.549696   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.551807   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.552162   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.552195   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.552326   71183 provision.go:143] copyHostCerts
	I0910 19:00:10.552370   71183 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem, removing ...
	I0910 19:00:10.552380   71183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem
	I0910 19:00:10.552435   71183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/cert.pem (1123 bytes)
	I0910 19:00:10.552559   71183 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem, removing ...
	I0910 19:00:10.552568   71183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem
	I0910 19:00:10.552588   71183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/key.pem (1675 bytes)
	I0910 19:00:10.552646   71183 exec_runner.go:144] found /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem, removing ...
	I0910 19:00:10.552653   71183 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem
	I0910 19:00:10.552669   71183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5973/.minikube/ca.pem (1082 bytes)
	I0910 19:00:10.552714   71183 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem org=jenkins.embed-certs-836868 san=[127.0.0.1 192.168.39.107 embed-certs-836868 localhost minikube]
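provision.go then generates a per-machine server certificate signed by the shared CA, with SANs covering 127.0.0.1, the VM IP, the machine name, localhost and minikube. A compact sketch of assembling such a SAN list with crypto/x509; it is self-signed here for brevity, whereas the real flow signs with ca.pem/ca-key.pem, and the validity period reuses the CertExpiration value from the profile:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-836868"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log: two IPs plus three host names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.107")},
		DNSNames:    []string{"embed-certs-836868", "localhost", "minikube"},
	}
	// Self-signed for the sketch; the real code uses the shared CA as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}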
	I0910 19:00:10.610073   71183 provision.go:177] copyRemoteCerts
	I0910 19:00:10.610132   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 19:00:10.610153   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.612881   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.613264   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.613301   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.613485   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.613695   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.613863   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.613980   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:10.695479   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0910 19:00:10.719380   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0910 19:00:10.744099   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 19:00:10.767849   71183 provision.go:87] duration metric: took 221.638443ms to configureAuth
	I0910 19:00:10.767873   71183 buildroot.go:189] setting minikube options for container-runtime
	I0910 19:00:10.768065   71183 config.go:182] Loaded profile config "embed-certs-836868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:00:10.768150   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.770831   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.771149   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.771178   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.771338   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.771539   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.771702   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.771825   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.771952   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:10.772106   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:10.772120   71183 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0910 19:00:10.992528   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0910 19:00:10.992568   71183 machine.go:96] duration metric: took 786.857321ms to provisionDockerMachine
	I0910 19:00:10.992583   71183 start.go:293] postStartSetup for "embed-certs-836868" (driver="kvm2")
	I0910 19:00:10.992598   71183 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 19:00:10.992630   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:10.992999   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 19:00:10.993030   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:10.995361   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.995745   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:10.995777   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:10.995925   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:10.996100   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:10.996212   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:10.996375   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:11.079205   71183 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 19:00:11.083998   71183 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 19:00:11.084028   71183 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/addons for local assets ...
	I0910 19:00:11.084089   71183 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5973/.minikube/files for local assets ...
	I0910 19:00:11.084158   71183 filesync.go:149] local asset: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem -> 131212.pem in /etc/ssl/certs
	I0910 19:00:11.084241   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 19:00:11.093150   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /etc/ssl/certs/131212.pem (1708 bytes)
	I0910 19:00:11.116894   71183 start.go:296] duration metric: took 124.294668ms for postStartSetup
	I0910 19:00:11.116938   71183 fix.go:56] duration metric: took 19.934731446s for fixHost
	I0910 19:00:11.116962   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:11.119482   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.119784   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.119821   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.119980   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:11.120176   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.120331   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.120501   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:11.120645   71183 main.go:141] libmachine: Using SSH client type: native
	I0910 19:00:11.120868   71183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0910 19:00:11.120883   71183 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 19:00:11.217542   71183 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725994811.172877822
	
	I0910 19:00:11.217570   71183 fix.go:216] guest clock: 1725994811.172877822
	I0910 19:00:11.217577   71183 fix.go:229] Guest: 2024-09-10 19:00:11.172877822 +0000 UTC Remote: 2024-09-10 19:00:11.116943488 +0000 UTC m=+358.948412200 (delta=55.934334ms)
	I0910 19:00:11.217603   71183 fix.go:200] guest clock delta is within tolerance: 55.934334ms
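fix.go compares the guest's `date +%s.%N` output against the host-side wall clock and only resyncs when the delta exceeds a tolerance; here the 55.9ms drift is accepted. A small sketch of parsing that output and computing the delta (the 1s tolerance below is an assumption; the real threshold is not shown in this excerpt):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// parseEpoch turns `date +%s.%N` output such as "1725994811.172877822"
// into a time.Time (float64 parsing loses some nanosecond precision,
// which is fine for a drift check at millisecond scale).
func parseEpoch(s string) (time.Time, error) {
	secs, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1725994811.172877822") // value from the log
	if err != nil {
		panic(err)
	}
	remote := time.Now()
	delta := math.Abs(guest.Sub(remote).Seconds())
	const tolerance = 1.0 // seconds; assumed for this sketch
	fmt.Printf("guest clock delta: %.3fs (within tolerance: %v)\n",
		delta, delta <= tolerance)
}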
	I0910 19:00:11.217607   71183 start.go:83] releasing machines lock for "embed-certs-836868", held for 20.035440196s
	I0910 19:00:11.217627   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.217861   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:11.220855   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.221282   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.221313   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.221533   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.222074   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.222277   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:11.222354   71183 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 19:00:11.222402   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:11.222528   71183 ssh_runner.go:195] Run: cat /version.json
	I0910 19:00:11.222570   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:11.225205   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.225542   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.225565   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.225581   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.225753   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:11.225934   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.226035   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:11.226062   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:11.226109   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:11.226207   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:11.226283   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:11.226370   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:11.226535   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:11.226668   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:11.297642   71183 ssh_runner.go:195] Run: systemctl --version
	I0910 19:00:11.322486   71183 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0910 19:00:11.470402   71183 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 19:00:11.477843   71183 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 19:00:11.477903   71183 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 19:00:11.495518   71183 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 19:00:11.495542   71183 start.go:495] detecting cgroup driver to use...
	I0910 19:00:11.495597   71183 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 19:00:11.512467   71183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 19:00:11.526665   71183 docker.go:217] disabling cri-docker service (if available) ...
	I0910 19:00:11.526732   71183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0910 19:00:11.540445   71183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0910 19:00:11.554386   71183 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0910 19:00:11.682012   71183 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0910 19:00:11.846239   71183 docker.go:233] disabling docker service ...
	I0910 19:00:11.846303   71183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0910 19:00:11.860981   71183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0910 19:00:11.874271   71183 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0910 19:00:12.005716   71183 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0910 19:00:12.137151   71183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0910 19:00:12.151156   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 19:00:12.170086   71183 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0910 19:00:12.170150   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.180741   71183 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0910 19:00:12.180804   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.190933   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.200885   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:07.840772   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:08.341153   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:08.840737   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:09.340471   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:09.840262   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:10.340827   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:10.840645   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:11.340524   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:11.840521   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:12.340560   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:12.210950   71183 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 19:00:12.221730   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.232931   71183 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.251318   71183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0910 19:00:12.261473   71183 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 19:00:12.270818   71183 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0910 19:00:12.270873   71183 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0910 19:00:12.284581   71183 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 19:00:12.294214   71183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:00:12.424646   71183 ssh_runner.go:195] Run: sudo systemctl restart crio
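The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, move conmon into the "pod" cgroup, open net.ipv4.ip_unprivileged_port_start=0 via default_sysctls, then daemon-reload and restart CRI-O. A rough sketch of the same edits done with Go string handling rather than sed (paths and values copied from the log; error handling is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"regexp"
	"strings"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)

	// Pin the pause image and the cgroup driver, as the sed commands in the log do.
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)

	// Drop any existing conmon_cgroup line, then put conmon into the pod cgroup.
	s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
	s = strings.Replace(s, `cgroup_manager = "cgroupfs"`,
		"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)

	// Allow pods to bind low ports without extra privileges.
	if !strings.Contains(s, "default_sysctls") {
		s += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}

	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		panic(err)
	}
	// Reload units and restart CRI-O so the new config takes effect.
	for _, args := range [][]string{{"daemon-reload"}, {"restart", "crio"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			panic(fmt.Errorf("systemctl %v: %v: %s", args, err, out))
		}
	}
}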
	I0910 19:00:12.517553   71183 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0910 19:00:12.517633   71183 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0910 19:00:12.522728   71183 start.go:563] Will wait 60s for crictl version
	I0910 19:00:12.522775   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:00:12.526754   71183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 19:00:12.569377   71183 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0910 19:00:12.569454   71183 ssh_runner.go:195] Run: crio --version
	I0910 19:00:12.597783   71183 ssh_runner.go:195] Run: crio --version
	I0910 19:00:12.632619   71183 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0910 19:00:12.725298   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:15.223906   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:14.035868   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:16.534058   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:12.633800   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetIP
	I0910 19:00:12.637104   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:12.637447   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:12.637476   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:12.637684   71183 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 19:00:12.641996   71183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:00:12.654577   71183 kubeadm.go:883] updating cluster {Name:embed-certs-836868 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-836868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 19:00:12.654684   71183 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 19:00:12.654737   71183 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 19:00:12.694585   71183 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0910 19:00:12.694644   71183 ssh_runner.go:195] Run: which lz4
	I0910 19:00:12.699764   71183 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 19:00:12.705406   71183 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 19:00:12.705437   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0910 19:00:14.054131   71183 crio.go:462] duration metric: took 1.354391682s to copy over tarball
	I0910 19:00:14.054206   71183 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 19:00:16.114941   71183 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.06070257s)
	I0910 19:00:16.114968   71183 crio.go:469] duration metric: took 2.060808083s to extract the tarball
	I0910 19:00:16.114978   71183 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 19:00:16.153934   71183 ssh_runner.go:195] Run: sudo crictl images --output json
	I0910 19:00:16.199988   71183 crio.go:514] all images are preloaded for cri-o runtime.
	I0910 19:00:16.200008   71183 cache_images.go:84] Images are preloaded, skipping loading
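Because no preloaded images were found for v1.31.0, the ~389 MB cri-o preload tarball is transferred to /preloaded.tar.lz4, unpacked into /var with security xattrs preserved, and then deleted; the second `crictl images` above confirms the image store is now populated. A local sketch of that check-copy-extract-cleanup sequence, with the scp step simplified to a plain file copy:

package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

func main() {
	src := "/home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4"
	dst := "/preloaded.tar.lz4"

	// Only transfer the tarball if it is not already on the node.
	if _, err := os.Stat(dst); os.IsNotExist(err) {
		in, err := os.Open(src)
		if err != nil {
			panic(err)
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			panic(err)
		}
		if _, err := io.Copy(out, in); err != nil {
			panic(err)
		}
		out.Close()
	}

	// Same extraction flags as the log: keep security xattrs, decompress with lz4.
	tar := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", dst)
	tar.Stdout, tar.Stderr = os.Stdout, os.Stderr
	if err := tar.Run(); err != nil {
		panic(err)
	}
	// Remove the tarball once the image store under /var is populated.
	if err := os.Remove(dst); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}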
	I0910 19:00:16.200015   71183 kubeadm.go:934] updating node { 192.168.39.107 8443 v1.31.0 crio true true} ...
	I0910 19:00:16.200109   71183 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-836868 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-836868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 19:00:16.200168   71183 ssh_runner.go:195] Run: crio config
	I0910 19:00:16.249409   71183 cni.go:84] Creating CNI manager for ""
	I0910 19:00:16.249430   71183 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:00:16.249443   71183 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 19:00:16.249462   71183 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-836868 NodeName:embed-certs-836868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 19:00:16.249596   71183 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-836868"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 19:00:16.249652   71183 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 19:00:16.265984   71183 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 19:00:16.266062   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 19:00:16.276007   71183 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0910 19:00:16.291971   71183 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 19:00:16.307712   71183 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0910 19:00:16.323789   71183 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0910 19:00:16.327478   71183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 19:00:16.339545   71183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:00:16.470249   71183 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:00:16.487798   71183 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868 for IP: 192.168.39.107
	I0910 19:00:16.487838   71183 certs.go:194] generating shared ca certs ...
	I0910 19:00:16.487858   71183 certs.go:226] acquiring lock for ca certs: {Name:mk3f61979cd6c0fb13fdaf4e35ab8dc84995a5cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:00:16.488058   71183 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key
	I0910 19:00:16.488110   71183 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key
	I0910 19:00:16.488124   71183 certs.go:256] generating profile certs ...
	I0910 19:00:16.488243   71183 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/client.key
	I0910 19:00:16.488307   71183 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/apiserver.key.04acd22a
	I0910 19:00:16.488355   71183 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/proxy-client.key
	I0910 19:00:16.488507   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem (1338 bytes)
	W0910 19:00:16.488547   71183 certs.go:480] ignoring /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121_empty.pem, impossibly tiny 0 bytes
	I0910 19:00:16.488560   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I0910 19:00:16.488593   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/ca.pem (1082 bytes)
	I0910 19:00:16.488633   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/cert.pem (1123 bytes)
	I0910 19:00:16.488669   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/certs/key.pem (1675 bytes)
	I0910 19:00:16.488856   71183 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem (1708 bytes)
	I0910 19:00:16.489528   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 19:00:16.529980   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 19:00:16.568653   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 19:00:16.593924   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 19:00:16.628058   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0910 19:00:16.669209   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 19:00:16.693274   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 19:00:16.716323   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/embed-certs-836868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 19:00:16.740155   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 19:00:16.763908   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/certs/13121.pem --> /usr/share/ca-certificates/13121.pem (1338 bytes)
	I0910 19:00:16.787980   71183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/ssl/certs/131212.pem --> /usr/share/ca-certificates/131212.pem (1708 bytes)
	I0910 19:00:16.811754   71183 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 19:00:16.828151   71183 ssh_runner.go:195] Run: openssl version
	I0910 19:00:16.834095   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 19:00:16.845376   71183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:00:16.850178   71183 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:00:16.850230   71183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 19:00:16.856507   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 19:00:16.868105   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13121.pem && ln -fs /usr/share/ca-certificates/13121.pem /etc/ssl/certs/13121.pem"
	I0910 19:00:16.879950   71183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13121.pem
	I0910 19:00:16.884778   71183 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 10 17:46 /usr/share/ca-certificates/13121.pem
	I0910 19:00:16.884823   71183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13121.pem
	I0910 19:00:16.890715   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13121.pem /etc/ssl/certs/51391683.0"
	I0910 19:00:16.903523   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131212.pem && ln -fs /usr/share/ca-certificates/131212.pem /etc/ssl/certs/131212.pem"
	I0910 19:00:16.914585   71183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131212.pem
	I0910 19:00:16.919105   71183 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 10 17:46 /usr/share/ca-certificates/131212.pem
	I0910 19:00:16.919151   71183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131212.pem
	I0910 19:00:16.924965   71183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131212.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 19:00:16.935579   71183 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 19:00:16.939895   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 19:00:16.945595   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 19:00:16.951247   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 19:00:16.956938   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 19:00:16.962908   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 19:00:16.968664   71183 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0910 19:00:16.974624   71183 kubeadm.go:392] StartCluster: {Name:embed-certs-836868 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-836868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 19:00:16.974725   71183 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0910 19:00:16.974778   71183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 19:00:17.012869   71183 cri.go:89] found id: ""
	I0910 19:00:17.012947   71183 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 19:00:17.023781   71183 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0910 19:00:17.023798   71183 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0910 19:00:17.023846   71183 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 19:00:17.034549   71183 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 19:00:17.035566   71183 kubeconfig.go:125] found "embed-certs-836868" server: "https://192.168.39.107:8443"
	I0910 19:00:17.037751   71183 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 19:00:17.047667   71183 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.107
	I0910 19:00:17.047696   71183 kubeadm.go:1160] stopping kube-system containers ...
	I0910 19:00:17.047708   71183 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0910 19:00:17.047747   71183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0910 19:00:17.083130   71183 cri.go:89] found id: ""
	I0910 19:00:17.083200   71183 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 19:00:17.101035   71183 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:00:17.111335   71183 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:00:17.111357   71183 kubeadm.go:157] found existing configuration files:
	
	I0910 19:00:17.111414   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:00:17.120543   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:00:17.120593   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:00:17.130938   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:00:17.140688   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:00:17.140747   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:00:17.150637   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:00:17.160483   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:00:17.160520   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:00:17.170417   71183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:00:17.179778   71183 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:00:17.179827   71183 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:00:17.189197   71183 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:00:17.199264   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:12.841060   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:13.340347   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:13.841136   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:14.340603   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:14.840913   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:15.341205   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:15.840692   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:16.340839   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:16.841050   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:17.341340   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:17.224985   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:19.231248   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:18.534658   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:20.534807   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:17.309791   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.257162   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.482216   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.555094   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:18.645089   71183 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:00:18.645178   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.146266   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.645546   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.146275   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.645291   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.662158   71183 api_server.go:72] duration metric: took 2.017082575s to wait for apiserver process to appear ...
	I0910 19:00:20.662183   71183 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:00:20.662204   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:17.840510   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:18.340821   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:18.841156   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.340316   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:19.840339   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.341140   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:20.841333   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:21.340342   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:21.840282   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:22.340361   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.326005   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 19:00:23.326036   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 19:00:23.326048   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:23.346004   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 19:00:23.346035   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 19:00:23.662353   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:23.669314   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:00:23.669344   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:00:24.162975   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:24.170262   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0910 19:00:24.170298   71183 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0910 19:00:24.662865   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:00:24.667320   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I0910 19:00:24.674393   71183 api_server.go:141] control plane version: v1.31.0
	I0910 19:00:24.674418   71183 api_server.go:131] duration metric: took 4.01222766s to wait for apiserver health ...
	I0910 19:00:24.674427   71183 cni.go:84] Creating CNI manager for ""
	I0910 19:00:24.674433   71183 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:00:24.676229   71183 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 19:00:24.677519   71183 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 19:00:24.692951   71183 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 19:00:24.718355   71183 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:00:24.732731   71183 system_pods.go:59] 8 kube-system pods found
	I0910 19:00:24.732758   71183 system_pods.go:61] "coredns-6f6b679f8f-mt78p" [b4bbfe99-3c36-4095-b7e8-ee0861f9973f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 19:00:24.732764   71183 system_pods.go:61] "etcd-embed-certs-836868" [0515a8db-e1b9-41d9-a69e-e49fbd6af70f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 19:00:24.732775   71183 system_pods.go:61] "kube-apiserver-embed-certs-836868" [f6771940-518b-4ae6-93a6-8ffd2e08a6ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 19:00:24.732781   71183 system_pods.go:61] "kube-controller-manager-embed-certs-836868" [07f6b11e-478b-4501-b615-8906877c7cbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 19:00:24.732798   71183 system_pods.go:61] "kube-proxy-4fddv" [13f0b1df-26eb-4a6c-957d-0b7655309cb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0910 19:00:24.732808   71183 system_pods.go:61] "kube-scheduler-embed-certs-836868" [a16bf2b1-3fe3-487b-92f7-f71f693b83dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 19:00:24.732817   71183 system_pods.go:61] "metrics-server-6867b74b74-26knw" [fdf89bfa-f2b6-4dc4-9279-ed75c1256494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:00:24.732823   71183 system_pods.go:61] "storage-provisioner" [47ed78c5-1cce-4d50-a023-5c356f331035] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 19:00:24.732835   71183 system_pods.go:74] duration metric: took 14.459216ms to wait for pod list to return data ...
	I0910 19:00:24.732846   71183 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:00:24.742472   71183 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:00:24.742497   71183 node_conditions.go:123] node cpu capacity is 2
	I0910 19:00:24.742507   71183 node_conditions.go:105] duration metric: took 9.657853ms to run NodePressure ...
	I0910 19:00:24.742523   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 19:00:25.021719   71183 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0910 19:00:25.026163   71183 kubeadm.go:739] kubelet initialised
	I0910 19:00:25.026187   71183 kubeadm.go:740] duration metric: took 4.442058ms waiting for restarted kubelet to initialise ...
	I0910 19:00:25.026196   71183 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:00:25.030895   71183 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.035021   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.035044   71183 pod_ready.go:82] duration metric: took 4.12756ms for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.035055   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.035064   71183 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.039362   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "etcd-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.039381   71183 pod_ready.go:82] duration metric: took 4.309293ms for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.039389   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "etcd-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.039394   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.049142   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.049164   71183 pod_ready.go:82] duration metric: took 9.762471ms for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.049175   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.049182   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.122255   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.122285   71183 pod_ready.go:82] duration metric: took 73.09407ms for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.122295   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.122301   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.522122   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-proxy-4fddv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.522160   71183 pod_ready.go:82] duration metric: took 399.850787ms for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.522175   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-proxy-4fddv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.522185   71183 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:25.921918   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.921947   71183 pod_ready.go:82] duration metric: took 399.75274ms for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:25.921956   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:25.921962   71183 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:26.322195   71183 pod_ready.go:98] node "embed-certs-836868" hosting pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:26.322219   71183 pod_ready.go:82] duration metric: took 400.248825ms for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	E0910 19:00:26.322228   71183 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-836868" hosting pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:26.322235   71183 pod_ready.go:39] duration metric: took 1.296028669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:00:26.322251   71183 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 19:00:26.333796   71183 ops.go:34] apiserver oom_adj: -16
	I0910 19:00:26.333824   71183 kubeadm.go:597] duration metric: took 9.310018521s to restartPrimaryControlPlane
	I0910 19:00:26.333834   71183 kubeadm.go:394] duration metric: took 9.359219145s to StartCluster
	I0910 19:00:26.333850   71183 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:00:26.333920   71183 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 19:00:26.336496   71183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:00:26.336792   71183 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 19:00:26.336863   71183 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 19:00:26.336935   71183 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-836868"
	I0910 19:00:26.336969   71183 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-836868"
	W0910 19:00:26.336980   71183 addons.go:243] addon storage-provisioner should already be in state true
	I0910 19:00:26.336995   71183 addons.go:69] Setting default-storageclass=true in profile "embed-certs-836868"
	I0910 19:00:26.337050   71183 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-836868"
	I0910 19:00:26.337058   71183 config.go:182] Loaded profile config "embed-certs-836868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:00:26.337050   71183 addons.go:69] Setting metrics-server=true in profile "embed-certs-836868"
	I0910 19:00:26.337011   71183 host.go:66] Checking if "embed-certs-836868" exists ...
	I0910 19:00:26.337146   71183 addons.go:234] Setting addon metrics-server=true in "embed-certs-836868"
	W0910 19:00:26.337165   71183 addons.go:243] addon metrics-server should already be in state true
	I0910 19:00:26.337234   71183 host.go:66] Checking if "embed-certs-836868" exists ...
	I0910 19:00:26.337501   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.337547   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.337552   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.337583   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.337638   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.337677   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.339741   71183 out.go:177] * Verifying Kubernetes components...
	I0910 19:00:26.341792   71183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:00:26.354154   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37431
	I0910 19:00:26.354750   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.355345   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.355379   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.355756   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.356316   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
	I0910 19:00:26.356389   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.356428   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.356508   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35641
	I0910 19:00:26.356810   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.356893   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.357384   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.357411   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.361164   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.361278   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.361302   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.361363   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.361709   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.362446   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.362483   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.364762   71183 addons.go:234] Setting addon default-storageclass=true in "embed-certs-836868"
	W0910 19:00:26.364786   71183 addons.go:243] addon default-storageclass should already be in state true
	I0910 19:00:26.364814   71183 host.go:66] Checking if "embed-certs-836868" exists ...
	I0910 19:00:26.365165   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.365230   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.379158   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35583
	I0910 19:00:26.379696   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.380235   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.380266   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.380654   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.380865   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.382030   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0910 19:00:26.382358   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.382892   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.382912   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.382928   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:26.383271   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.383441   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.385129   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:26.385171   71183 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 19:00:26.385687   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40109
	I0910 19:00:26.386001   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.386217   71183 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:00:21.723833   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:23.724422   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:25.724456   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:23.034262   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:25.035125   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:26.386227   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 19:00:26.386289   71183 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 19:00:26.386309   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:26.386518   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.386533   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.386931   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.387566   71183 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:00:26.387651   71183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 19:00:26.387672   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:26.387618   71183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:00:26.387760   71183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:00:26.389782   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.389941   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:26.390190   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.390263   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:26.390558   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:26.390744   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:26.390921   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:26.391058   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.391542   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:26.391585   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.391788   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:26.391941   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:26.392097   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:26.392256   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:26.404601   71183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42887
	I0910 19:00:26.405167   71183 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:00:26.406097   71183 main.go:141] libmachine: Using API Version  1
	I0910 19:00:26.406655   71183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:00:26.407006   71183 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:00:26.407163   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetState
	I0910 19:00:26.409223   71183 main.go:141] libmachine: (embed-certs-836868) Calling .DriverName
	I0910 19:00:26.409437   71183 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 19:00:26.409454   71183 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 19:00:26.409470   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHHostname
	I0910 19:00:26.412388   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.412812   71183 main.go:141] libmachine: (embed-certs-836868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ef:f2", ip: ""} in network mk-embed-certs-836868: {Iface:virbr1 ExpiryTime:2024-09-10 20:00:03 +0000 UTC Type:0 Mac:52:54:00:24:ef:f2 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:embed-certs-836868 Clientid:01:52:54:00:24:ef:f2}
	I0910 19:00:26.412831   71183 main.go:141] libmachine: (embed-certs-836868) DBG | domain embed-certs-836868 has defined IP address 192.168.39.107 and MAC address 52:54:00:24:ef:f2 in network mk-embed-certs-836868
	I0910 19:00:26.413010   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHPort
	I0910 19:00:26.413177   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHKeyPath
	I0910 19:00:26.413333   71183 main.go:141] libmachine: (embed-certs-836868) Calling .GetSSHUsername
	I0910 19:00:26.413474   71183 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/embed-certs-836868/id_rsa Username:docker}
	I0910 19:00:26.533906   71183 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:00:26.552203   71183 node_ready.go:35] waiting up to 6m0s for node "embed-certs-836868" to be "Ready" ...
	I0910 19:00:26.687774   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 19:00:26.687804   71183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 19:00:26.690124   71183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:00:26.737647   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 19:00:26.737673   71183 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 19:00:26.739650   71183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 19:00:26.783096   71183 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:00:26.783125   71183 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 19:00:26.828766   71183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:00:22.841048   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.341180   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:23.841325   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:24.340485   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:24.841340   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:25.340935   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:25.840886   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:26.340826   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:26.840344   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:27.341189   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:27.844896   71183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.154733205s)
	I0910 19:00:27.844931   71183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.105250764s)
	I0910 19:00:27.844944   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.844969   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.844979   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.844980   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.845406   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.845420   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.845434   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.845446   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.845464   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.845471   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.845702   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.845733   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.845747   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.847084   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.847101   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.847110   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.847118   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.847308   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.847323   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.852938   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.852956   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.853198   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.853219   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.853224   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.879527   71183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.05071539s)
	I0910 19:00:27.879577   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.879597   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.880030   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.880050   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.880059   71183 main.go:141] libmachine: Making call to close driver server
	I0910 19:00:27.880081   71183 main.go:141] libmachine: (embed-certs-836868) Calling .Close
	I0910 19:00:27.880381   71183 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:00:27.880405   71183 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:00:27.880416   71183 addons.go:475] Verifying addon metrics-server=true in "embed-certs-836868"
	I0910 19:00:27.880383   71183 main.go:141] libmachine: (embed-certs-836868) DBG | Closing plugin on server side
	I0910 19:00:27.883034   71183 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0910 19:00:28.222881   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:30.223636   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:27.535036   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:30.034633   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:27.884243   71183 addons.go:510] duration metric: took 1.547392632s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0910 19:00:28.556786   71183 node_ready.go:53] node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:31.055519   71183 node_ready.go:53] node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:27.840306   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:28.340657   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:28.841179   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:29.340881   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:29.840957   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:30.341260   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:30.841151   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:31.340930   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:31.840360   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:32.341199   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:32.724435   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:35.223194   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:32.533611   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:34.534941   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:37.034007   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:33.056381   71183 node_ready.go:53] node "embed-certs-836868" has status "Ready":"False"
	I0910 19:00:34.056156   71183 node_ready.go:49] node "embed-certs-836868" has status "Ready":"True"
	I0910 19:00:34.056191   71183 node_ready.go:38] duration metric: took 7.503955102s for node "embed-certs-836868" to be "Ready" ...
	I0910 19:00:34.056200   71183 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:00:34.063331   71183 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:34.068294   71183 pod_ready.go:93] pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:34.068322   71183 pod_ready.go:82] duration metric: took 4.96275ms for pod "coredns-6f6b679f8f-mt78p" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:34.068335   71183 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:36.077798   71183 pod_ready.go:103] pod "etcd-embed-certs-836868" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:32.841192   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:33.340518   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:33.840995   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:34.341016   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:34.840480   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:35.340647   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:35.840416   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:36.340921   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:36.840557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:37.340956   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:37.224065   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:39.723852   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:39.533725   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:41.534430   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:37.576189   71183 pod_ready.go:93] pod "etcd-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.576218   71183 pod_ready.go:82] duration metric: took 3.507872898s for pod "etcd-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.576238   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.582150   71183 pod_ready.go:93] pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.582167   71183 pod_ready.go:82] duration metric: took 5.921544ms for pod "kube-apiserver-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.582175   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.586941   71183 pod_ready.go:93] pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.586956   71183 pod_ready.go:82] duration metric: took 4.774648ms for pod "kube-controller-manager-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.586963   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.591829   71183 pod_ready.go:93] pod "kube-proxy-4fddv" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.591846   71183 pod_ready.go:82] duration metric: took 4.876938ms for pod "kube-proxy-4fddv" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.591854   71183 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.657930   71183 pod_ready.go:93] pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace has status "Ready":"True"
	I0910 19:00:37.657952   71183 pod_ready.go:82] duration metric: took 66.092785ms for pod "kube-scheduler-embed-certs-836868" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:37.657962   71183 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	I0910 19:00:39.665465   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:41.666629   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:37.841210   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:38.341302   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:38.840926   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:39.340558   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:39.840395   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:40.341022   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:40.841093   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:41.341228   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:41.841103   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:42.340329   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:42.223446   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:44.223533   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:46.224840   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:44.033565   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:46.034402   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:44.164336   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:46.164983   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:42.841000   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:43.341147   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:43.840534   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:44.340988   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:44.840926   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:45.340859   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:45.840877   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:46.340930   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:46.841175   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:47.341064   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.722930   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:50.723539   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:48.036816   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:50.534367   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:48.667433   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:51.164114   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:47.841037   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.341204   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:48.840961   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:49.340679   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:49.841173   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:50.340751   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:50.841158   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:51.340999   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:51.840349   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:52.340383   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:52.723945   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:55.224168   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:53.034234   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:55.533690   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:53.164294   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:55.666369   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:52.840991   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:53.340439   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:53.840487   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:54.340407   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:54.840557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:55.340603   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:55.840619   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:56.340844   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:56.841190   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:57.340927   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:57.724247   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:00.223715   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:58.033639   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:00.034297   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:57.670234   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:00.164278   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:02.164755   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:00:57.840798   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:58.340905   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:58.841330   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:59.340743   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:00:59.840256   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:00.340970   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:00.840732   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:01.340927   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:01.341014   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:01.378922   72122 cri.go:89] found id: ""
	I0910 19:01:01.378953   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.378964   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:01.378971   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:01.379032   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:01.413274   72122 cri.go:89] found id: ""
	I0910 19:01:01.413302   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.413313   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:01.413320   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:01.413383   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:01.449165   72122 cri.go:89] found id: ""
	I0910 19:01:01.449204   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.449215   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:01.449221   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:01.449291   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:01.484627   72122 cri.go:89] found id: ""
	I0910 19:01:01.484650   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.484657   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:01.484663   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:01.484720   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:01.519332   72122 cri.go:89] found id: ""
	I0910 19:01:01.519357   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.519364   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:01.519370   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:01.519424   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:01.554080   72122 cri.go:89] found id: ""
	I0910 19:01:01.554102   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.554109   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:01.554114   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:01.554160   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:01.590100   72122 cri.go:89] found id: ""
	I0910 19:01:01.590131   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.590143   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:01.590149   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:01.590208   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:01.623007   72122 cri.go:89] found id: ""
	I0910 19:01:01.623034   72122 logs.go:276] 0 containers: []
	W0910 19:01:01.623045   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:01.623055   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:01.623070   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:01.679940   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:01.679971   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:01.694183   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:01.694218   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:01.826997   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:01.827025   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:01.827038   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:01.903885   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:01.903926   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:02.224039   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:04.224422   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:02.533395   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:05.034075   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:04.665680   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:06.665874   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:04.450792   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:04.471427   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:04.471501   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:04.521450   72122 cri.go:89] found id: ""
	I0910 19:01:04.521484   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.521494   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:04.521503   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:04.521562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:04.577588   72122 cri.go:89] found id: ""
	I0910 19:01:04.577622   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.577633   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:04.577641   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:04.577707   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:04.615558   72122 cri.go:89] found id: ""
	I0910 19:01:04.615586   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.615594   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:04.615599   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:04.615652   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:04.655763   72122 cri.go:89] found id: ""
	I0910 19:01:04.655793   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.655806   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:04.655815   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:04.655881   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:04.692620   72122 cri.go:89] found id: ""
	I0910 19:01:04.692642   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.692649   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:04.692654   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:04.692709   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:04.730575   72122 cri.go:89] found id: ""
	I0910 19:01:04.730601   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.730611   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:04.730616   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:04.730665   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:04.766716   72122 cri.go:89] found id: ""
	I0910 19:01:04.766742   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.766749   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:04.766754   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:04.766799   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:04.808122   72122 cri.go:89] found id: ""
	I0910 19:01:04.808151   72122 logs.go:276] 0 containers: []
	W0910 19:01:04.808162   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:04.808173   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:04.808185   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:04.858563   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:04.858592   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:04.872323   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:04.872350   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:04.942541   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:04.942571   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:04.942588   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:05.022303   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:05.022338   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:06.723760   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:08.724550   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:11.223094   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:07.533060   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:09.534466   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:12.034244   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:09.163526   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:11.164502   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:07.562092   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:07.575254   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:07.575308   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:07.616583   72122 cri.go:89] found id: ""
	I0910 19:01:07.616607   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.616615   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:07.616620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:07.616676   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:07.654676   72122 cri.go:89] found id: ""
	I0910 19:01:07.654700   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.654711   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:07.654718   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:07.654790   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:07.690054   72122 cri.go:89] found id: ""
	I0910 19:01:07.690085   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.690096   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:07.690104   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:07.690171   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:07.724273   72122 cri.go:89] found id: ""
	I0910 19:01:07.724295   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.724302   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:07.724307   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:07.724363   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:07.757621   72122 cri.go:89] found id: ""
	I0910 19:01:07.757646   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.757654   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:07.757660   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:07.757716   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:07.791502   72122 cri.go:89] found id: ""
	I0910 19:01:07.791533   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.791543   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:07.791557   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:07.791620   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:07.825542   72122 cri.go:89] found id: ""
	I0910 19:01:07.825577   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.825586   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:07.825592   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:07.825649   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:07.862278   72122 cri.go:89] found id: ""
	I0910 19:01:07.862303   72122 logs.go:276] 0 containers: []
	W0910 19:01:07.862312   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:07.862320   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:07.862331   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:07.952016   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:07.952059   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:07.997004   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:07.997034   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:08.047745   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:08.047783   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:08.064712   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:08.064736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:08.136822   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:10.637017   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:10.650113   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:10.650198   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:10.687477   72122 cri.go:89] found id: ""
	I0910 19:01:10.687504   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.687513   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:10.687520   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:10.687594   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:10.721410   72122 cri.go:89] found id: ""
	I0910 19:01:10.721437   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.721447   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:10.721455   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:10.721514   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:10.757303   72122 cri.go:89] found id: ""
	I0910 19:01:10.757330   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.757338   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:10.757343   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:10.757396   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:10.794761   72122 cri.go:89] found id: ""
	I0910 19:01:10.794788   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.794799   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:10.794806   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:10.794885   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:10.828631   72122 cri.go:89] found id: ""
	I0910 19:01:10.828657   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.828668   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:10.828675   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:10.828737   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:10.863609   72122 cri.go:89] found id: ""
	I0910 19:01:10.863634   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.863641   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:10.863646   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:10.863734   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:10.899299   72122 cri.go:89] found id: ""
	I0910 19:01:10.899324   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.899335   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:10.899342   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:10.899403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:10.939233   72122 cri.go:89] found id: ""
	I0910 19:01:10.939259   72122 logs.go:276] 0 containers: []
	W0910 19:01:10.939268   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:10.939277   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:10.939290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:10.976599   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:10.976627   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:11.029099   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:11.029144   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:11.045401   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:11.045426   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:11.119658   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:11.119679   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:11.119696   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:13.224189   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:15.723673   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:14.034325   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:16.534463   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:13.663847   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:15.664387   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:13.698696   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:13.712317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:13.712386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:13.747442   72122 cri.go:89] found id: ""
	I0910 19:01:13.747470   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.747480   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:13.747487   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:13.747555   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:13.782984   72122 cri.go:89] found id: ""
	I0910 19:01:13.783008   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.783015   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:13.783021   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:13.783078   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:13.820221   72122 cri.go:89] found id: ""
	I0910 19:01:13.820245   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.820256   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:13.820262   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:13.820322   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:13.854021   72122 cri.go:89] found id: ""
	I0910 19:01:13.854056   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.854068   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:13.854075   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:13.854138   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:13.888292   72122 cri.go:89] found id: ""
	I0910 19:01:13.888321   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.888331   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:13.888338   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:13.888398   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:13.922301   72122 cri.go:89] found id: ""
	I0910 19:01:13.922330   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.922341   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:13.922349   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:13.922408   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:13.959977   72122 cri.go:89] found id: ""
	I0910 19:01:13.960002   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.960010   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:13.960015   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:13.960074   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:13.995255   72122 cri.go:89] found id: ""
	I0910 19:01:13.995282   72122 logs.go:276] 0 containers: []
	W0910 19:01:13.995293   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:13.995308   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:13.995323   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:14.050760   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:14.050790   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:14.064694   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:14.064723   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:14.137406   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:14.137431   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:14.137447   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:14.216624   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:14.216657   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:16.765643   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:16.778746   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:16.778821   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:16.814967   72122 cri.go:89] found id: ""
	I0910 19:01:16.814999   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.815010   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:16.815017   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:16.815073   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:16.850306   72122 cri.go:89] found id: ""
	I0910 19:01:16.850334   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.850345   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:16.850352   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:16.850413   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:16.886104   72122 cri.go:89] found id: ""
	I0910 19:01:16.886134   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.886144   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:16.886152   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:16.886218   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:16.921940   72122 cri.go:89] found id: ""
	I0910 19:01:16.921968   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.921977   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:16.921983   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:16.922032   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:16.956132   72122 cri.go:89] found id: ""
	I0910 19:01:16.956166   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.956177   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:16.956185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:16.956247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:16.988240   72122 cri.go:89] found id: ""
	I0910 19:01:16.988269   72122 logs.go:276] 0 containers: []
	W0910 19:01:16.988278   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:16.988284   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:16.988330   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:17.022252   72122 cri.go:89] found id: ""
	I0910 19:01:17.022281   72122 logs.go:276] 0 containers: []
	W0910 19:01:17.022291   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:17.022297   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:17.022364   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:17.058664   72122 cri.go:89] found id: ""
	I0910 19:01:17.058693   72122 logs.go:276] 0 containers: []
	W0910 19:01:17.058703   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:17.058715   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:17.058740   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:17.136927   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:17.136964   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:17.189427   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:17.189457   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:17.242193   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:17.242225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:17.257878   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:17.257908   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:17.330096   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:17.724465   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:20.224230   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:18.534806   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:21.034368   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:17.667897   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:20.165174   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:22.165421   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:19.831030   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:19.844516   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:19.844581   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:19.879878   72122 cri.go:89] found id: ""
	I0910 19:01:19.879908   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.879919   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:19.879927   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:19.879988   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:19.915992   72122 cri.go:89] found id: ""
	I0910 19:01:19.916018   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.916025   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:19.916030   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:19.916084   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:19.949206   72122 cri.go:89] found id: ""
	I0910 19:01:19.949232   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.949242   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:19.949249   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:19.949311   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:19.983011   72122 cri.go:89] found id: ""
	I0910 19:01:19.983035   72122 logs.go:276] 0 containers: []
	W0910 19:01:19.983043   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:19.983048   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:19.983096   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:20.018372   72122 cri.go:89] found id: ""
	I0910 19:01:20.018394   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.018402   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:20.018408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:20.018466   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:20.053941   72122 cri.go:89] found id: ""
	I0910 19:01:20.053967   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.053975   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:20.053980   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:20.054037   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:20.084999   72122 cri.go:89] found id: ""
	I0910 19:01:20.085026   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.085035   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:20.085042   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:20.085115   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:20.124036   72122 cri.go:89] found id: ""
	I0910 19:01:20.124063   72122 logs.go:276] 0 containers: []
	W0910 19:01:20.124072   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:20.124086   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:20.124103   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:20.176917   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:20.176944   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:20.190831   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:20.190852   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:20.257921   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:20.257942   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:20.257954   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:20.335320   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:20.335350   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:22.723788   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:25.223765   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:23.034456   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:25.534821   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:24.663208   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:26.664282   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:22.875167   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:22.888803   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:22.888858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:22.922224   72122 cri.go:89] found id: ""
	I0910 19:01:22.922252   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.922264   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:22.922270   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:22.922328   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:22.959502   72122 cri.go:89] found id: ""
	I0910 19:01:22.959536   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.959546   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:22.959553   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:22.959619   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:22.992914   72122 cri.go:89] found id: ""
	I0910 19:01:22.992944   72122 logs.go:276] 0 containers: []
	W0910 19:01:22.992955   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:22.992962   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:22.993022   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:23.028342   72122 cri.go:89] found id: ""
	I0910 19:01:23.028367   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.028376   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:23.028384   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:23.028443   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:23.064715   72122 cri.go:89] found id: ""
	I0910 19:01:23.064742   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.064753   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:23.064761   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:23.064819   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:23.100752   72122 cri.go:89] found id: ""
	I0910 19:01:23.100781   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.100789   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:23.100795   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:23.100857   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:23.136017   72122 cri.go:89] found id: ""
	I0910 19:01:23.136045   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.136055   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:23.136062   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:23.136108   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:23.170787   72122 cri.go:89] found id: ""
	I0910 19:01:23.170811   72122 logs.go:276] 0 containers: []
	W0910 19:01:23.170819   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:23.170826   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:23.170840   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:23.210031   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:23.210059   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:23.261525   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:23.261557   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:23.275611   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:23.275636   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:23.348543   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:23.348568   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:23.348582   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:25.929406   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:25.942658   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:25.942737   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:25.977231   72122 cri.go:89] found id: ""
	I0910 19:01:25.977260   72122 logs.go:276] 0 containers: []
	W0910 19:01:25.977270   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:25.977277   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:25.977336   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:26.015060   72122 cri.go:89] found id: ""
	I0910 19:01:26.015093   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.015103   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:26.015110   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:26.015180   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:26.053618   72122 cri.go:89] found id: ""
	I0910 19:01:26.053643   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.053651   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:26.053656   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:26.053712   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:26.090489   72122 cri.go:89] found id: ""
	I0910 19:01:26.090515   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.090523   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:26.090529   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:26.090587   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:26.126687   72122 cri.go:89] found id: ""
	I0910 19:01:26.126710   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.126718   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:26.126723   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:26.126771   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:26.160901   72122 cri.go:89] found id: ""
	I0910 19:01:26.160939   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.160951   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:26.160959   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:26.161017   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:26.195703   72122 cri.go:89] found id: ""
	I0910 19:01:26.195728   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.195737   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:26.195743   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:26.195794   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:26.230394   72122 cri.go:89] found id: ""
	I0910 19:01:26.230414   72122 logs.go:276] 0 containers: []
	W0910 19:01:26.230422   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:26.230430   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:26.230444   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:26.296884   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:26.296905   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:26.296926   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:26.371536   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:26.371569   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:26.412926   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:26.412958   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:26.462521   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:26.462550   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:27.725957   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:30.224312   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:28.034338   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:30.034794   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:32.035284   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:28.668205   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:31.166271   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:28.976550   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:28.989517   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:28.989586   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:29.025638   72122 cri.go:89] found id: ""
	I0910 19:01:29.025662   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.025671   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:29.025677   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:29.025719   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:29.067473   72122 cri.go:89] found id: ""
	I0910 19:01:29.067495   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.067502   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:29.067507   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:29.067556   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:29.105587   72122 cri.go:89] found id: ""
	I0910 19:01:29.105616   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.105628   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:29.105635   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:29.105696   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:29.142427   72122 cri.go:89] found id: ""
	I0910 19:01:29.142458   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.142468   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:29.142474   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:29.142530   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:29.178553   72122 cri.go:89] found id: ""
	I0910 19:01:29.178575   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.178582   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:29.178587   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:29.178638   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:29.212997   72122 cri.go:89] found id: ""
	I0910 19:01:29.213025   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.213034   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:29.213040   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:29.213109   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:29.247057   72122 cri.go:89] found id: ""
	I0910 19:01:29.247083   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.247091   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:29.247097   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:29.247151   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:29.285042   72122 cri.go:89] found id: ""
	I0910 19:01:29.285084   72122 logs.go:276] 0 containers: []
	W0910 19:01:29.285096   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:29.285107   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:29.285131   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:29.336003   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:29.336033   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:29.349867   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:29.349890   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:29.422006   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:29.422028   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:29.422043   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:29.504047   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:29.504079   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:32.050723   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:32.063851   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:32.063904   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:32.100816   72122 cri.go:89] found id: ""
	I0910 19:01:32.100841   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.100851   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:32.100858   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:32.100924   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:32.134863   72122 cri.go:89] found id: ""
	I0910 19:01:32.134892   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.134902   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:32.134909   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:32.134967   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:32.169873   72122 cri.go:89] found id: ""
	I0910 19:01:32.169901   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.169912   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:32.169919   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:32.169973   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:32.202161   72122 cri.go:89] found id: ""
	I0910 19:01:32.202187   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.202197   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:32.202204   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:32.202264   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:32.236850   72122 cri.go:89] found id: ""
	I0910 19:01:32.236879   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.236888   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:32.236896   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:32.236957   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:32.271479   72122 cri.go:89] found id: ""
	I0910 19:01:32.271511   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.271530   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:32.271542   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:32.271701   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:32.306724   72122 cri.go:89] found id: ""
	I0910 19:01:32.306747   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.306754   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:32.306760   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:32.306811   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:32.341153   72122 cri.go:89] found id: ""
	I0910 19:01:32.341184   72122 logs.go:276] 0 containers: []
	W0910 19:01:32.341195   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:32.341206   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:32.341221   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:32.393087   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:32.393122   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:32.406565   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:32.406591   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:32.478030   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:32.478048   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:32.478079   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:32.224371   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:34.723372   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:34.533510   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:37.033933   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:33.671725   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:36.165396   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:32.568440   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:32.568478   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:35.112022   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:35.125210   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:35.125286   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:35.160716   72122 cri.go:89] found id: ""
	I0910 19:01:35.160743   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.160753   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:35.160759   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:35.160817   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:35.196500   72122 cri.go:89] found id: ""
	I0910 19:01:35.196530   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.196541   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:35.196548   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:35.196622   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:35.232476   72122 cri.go:89] found id: ""
	I0910 19:01:35.232510   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.232521   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:35.232528   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:35.232590   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:35.269612   72122 cri.go:89] found id: ""
	I0910 19:01:35.269635   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.269644   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:35.269649   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:35.269697   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:35.307368   72122 cri.go:89] found id: ""
	I0910 19:01:35.307393   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.307401   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:35.307408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:35.307475   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:35.342079   72122 cri.go:89] found id: ""
	I0910 19:01:35.342108   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.342119   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:35.342126   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:35.342188   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:35.379732   72122 cri.go:89] found id: ""
	I0910 19:01:35.379761   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.379771   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:35.379778   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:35.379840   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:35.419067   72122 cri.go:89] found id: ""
	I0910 19:01:35.419098   72122 logs.go:276] 0 containers: []
	W0910 19:01:35.419109   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:35.419120   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:35.419139   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:35.472459   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:35.472494   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:35.487044   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:35.487078   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:35.565242   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:35.565264   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:35.565282   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:35.645918   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:35.645951   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:36.724330   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:38.724368   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:41.224272   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:39.533968   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:41.534579   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:38.666059   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:41.164158   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:38.189238   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:38.203973   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:38.204035   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:38.241402   72122 cri.go:89] found id: ""
	I0910 19:01:38.241428   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.241438   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:38.241446   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:38.241506   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:38.280657   72122 cri.go:89] found id: ""
	I0910 19:01:38.280685   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.280693   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:38.280698   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:38.280753   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:38.319697   72122 cri.go:89] found id: ""
	I0910 19:01:38.319725   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.319735   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:38.319742   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:38.319804   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:38.356766   72122 cri.go:89] found id: ""
	I0910 19:01:38.356799   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.356810   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:38.356817   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:38.356876   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:38.395468   72122 cri.go:89] found id: ""
	I0910 19:01:38.395497   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.395508   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:38.395516   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:38.395577   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:38.434942   72122 cri.go:89] found id: ""
	I0910 19:01:38.434965   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.434974   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:38.434979   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:38.435025   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:38.470687   72122 cri.go:89] found id: ""
	I0910 19:01:38.470715   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.470724   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:38.470729   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:38.470777   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:38.505363   72122 cri.go:89] found id: ""
	I0910 19:01:38.505394   72122 logs.go:276] 0 containers: []
	W0910 19:01:38.505405   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:38.505417   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:38.505432   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:38.557735   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:38.557770   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:38.586094   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:38.586128   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:38.665190   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:38.665215   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:38.665231   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:38.743748   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:38.743779   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:41.284310   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:41.299086   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:41.299157   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:41.340453   72122 cri.go:89] found id: ""
	I0910 19:01:41.340476   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.340484   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:41.340489   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:41.340544   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:41.374028   72122 cri.go:89] found id: ""
	I0910 19:01:41.374052   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.374060   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:41.374066   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:41.374117   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:41.413888   72122 cri.go:89] found id: ""
	I0910 19:01:41.413915   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.413929   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:41.413935   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:41.413994   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:41.450846   72122 cri.go:89] found id: ""
	I0910 19:01:41.450873   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.450883   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:41.450890   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:41.450950   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:41.484080   72122 cri.go:89] found id: ""
	I0910 19:01:41.484107   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.484115   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:41.484120   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:41.484168   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:41.523652   72122 cri.go:89] found id: ""
	I0910 19:01:41.523677   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.523685   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:41.523690   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:41.523749   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:41.563690   72122 cri.go:89] found id: ""
	I0910 19:01:41.563715   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.563727   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:41.563734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:41.563797   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:41.602101   72122 cri.go:89] found id: ""
	I0910 19:01:41.602122   72122 logs.go:276] 0 containers: []
	W0910 19:01:41.602130   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:41.602137   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:41.602152   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:41.655459   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:41.655488   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:41.670037   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:41.670062   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:41.741399   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:41.741417   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:41.741428   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:41.817411   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:41.817445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:43.726285   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:46.223867   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:44.034404   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:46.533246   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:43.666629   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:46.164675   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:44.363631   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:44.378279   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:44.378344   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:44.412450   72122 cri.go:89] found id: ""
	I0910 19:01:44.412486   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.412495   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:44.412502   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:44.412569   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:44.448378   72122 cri.go:89] found id: ""
	I0910 19:01:44.448407   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.448415   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:44.448420   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:44.448470   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:44.483478   72122 cri.go:89] found id: ""
	I0910 19:01:44.483516   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.483524   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:44.483530   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:44.483584   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:44.517787   72122 cri.go:89] found id: ""
	I0910 19:01:44.517812   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.517822   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:44.517828   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:44.517886   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:44.554909   72122 cri.go:89] found id: ""
	I0910 19:01:44.554939   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.554950   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:44.554957   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:44.555018   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:44.589865   72122 cri.go:89] found id: ""
	I0910 19:01:44.589890   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.589909   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:44.589923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:44.589968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:44.626712   72122 cri.go:89] found id: ""
	I0910 19:01:44.626739   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.626749   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:44.626756   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:44.626815   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:44.664985   72122 cri.go:89] found id: ""
	I0910 19:01:44.665067   72122 logs.go:276] 0 containers: []
	W0910 19:01:44.665103   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:44.665114   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:44.665165   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:44.721160   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:44.721196   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:44.735339   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:44.735366   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:44.810056   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:44.810080   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:44.810094   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:44.898822   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:44.898871   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:47.438440   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:47.451438   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:47.451506   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:48.723661   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:50.723768   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:48.534671   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:51.033397   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:48.164739   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:50.665165   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:47.491703   72122 cri.go:89] found id: ""
	I0910 19:01:47.491729   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.491740   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:47.491747   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:47.491811   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:47.526834   72122 cri.go:89] found id: ""
	I0910 19:01:47.526862   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.526874   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:47.526880   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:47.526940   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:47.570463   72122 cri.go:89] found id: ""
	I0910 19:01:47.570488   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.570496   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:47.570503   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:47.570545   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:47.608691   72122 cri.go:89] found id: ""
	I0910 19:01:47.608715   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.608727   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:47.608734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:47.608780   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:47.648279   72122 cri.go:89] found id: ""
	I0910 19:01:47.648308   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.648316   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:47.648324   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:47.648386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:47.684861   72122 cri.go:89] found id: ""
	I0910 19:01:47.684885   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.684892   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:47.684897   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:47.684947   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:47.721004   72122 cri.go:89] found id: ""
	I0910 19:01:47.721037   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.721049   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:47.721056   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:47.721134   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:47.756154   72122 cri.go:89] found id: ""
	I0910 19:01:47.756181   72122 logs.go:276] 0 containers: []
	W0910 19:01:47.756192   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:47.756202   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:47.756217   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:47.806860   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:47.806889   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:47.822419   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:47.822445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:47.891966   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:47.891986   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:47.892000   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:47.978510   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:47.978561   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:50.519264   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:50.533576   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:50.533630   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:50.567574   72122 cri.go:89] found id: ""
	I0910 19:01:50.567601   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.567612   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:50.567619   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:50.567678   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:50.608824   72122 cri.go:89] found id: ""
	I0910 19:01:50.608850   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.608858   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:50.608863   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:50.608939   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:50.644502   72122 cri.go:89] found id: ""
	I0910 19:01:50.644530   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.644538   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:50.644544   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:50.644590   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:50.682309   72122 cri.go:89] found id: ""
	I0910 19:01:50.682332   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.682340   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:50.682345   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:50.682404   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:50.735372   72122 cri.go:89] found id: ""
	I0910 19:01:50.735398   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.735410   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:50.735418   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:50.735482   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:50.786364   72122 cri.go:89] found id: ""
	I0910 19:01:50.786391   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.786401   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:50.786408   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:50.786464   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:50.831525   72122 cri.go:89] found id: ""
	I0910 19:01:50.831564   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.831575   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:50.831582   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:50.831645   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:50.873457   72122 cri.go:89] found id: ""
	I0910 19:01:50.873482   72122 logs.go:276] 0 containers: []
	W0910 19:01:50.873493   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:50.873503   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:50.873524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:50.956032   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:50.956069   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:50.996871   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:50.996904   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:51.047799   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:51.047824   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:51.061946   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:51.061970   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:51.136302   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:53.222492   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:55.223835   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:53.034478   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:55.532623   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:52.665991   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:55.164343   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:53.636464   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:53.649971   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:53.650054   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:53.688172   72122 cri.go:89] found id: ""
	I0910 19:01:53.688201   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.688211   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:53.688217   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:53.688274   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:53.725094   72122 cri.go:89] found id: ""
	I0910 19:01:53.725119   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.725128   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:53.725135   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:53.725196   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:53.763866   72122 cri.go:89] found id: ""
	I0910 19:01:53.763893   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.763907   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:53.763914   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:53.763971   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:53.797760   72122 cri.go:89] found id: ""
	I0910 19:01:53.797787   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.797798   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:53.797805   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:53.797862   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:53.830305   72122 cri.go:89] found id: ""
	I0910 19:01:53.830332   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.830340   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:53.830346   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:53.830402   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:53.861970   72122 cri.go:89] found id: ""
	I0910 19:01:53.861995   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.862003   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:53.862009   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:53.862059   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:53.896577   72122 cri.go:89] found id: ""
	I0910 19:01:53.896600   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.896609   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:53.896614   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:53.896660   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:53.935051   72122 cri.go:89] found id: ""
	I0910 19:01:53.935077   72122 logs.go:276] 0 containers: []
	W0910 19:01:53.935086   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:53.935094   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:53.935105   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:53.950252   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:53.950276   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:54.023327   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:54.023346   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:54.023361   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:54.101605   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:54.101643   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:54.142906   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:54.142930   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:56.697701   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:56.717755   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:56.717836   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:56.763564   72122 cri.go:89] found id: ""
	I0910 19:01:56.763594   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.763606   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:56.763613   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:56.763675   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:56.815780   72122 cri.go:89] found id: ""
	I0910 19:01:56.815808   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.815816   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:56.815821   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:56.815883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:56.848983   72122 cri.go:89] found id: ""
	I0910 19:01:56.849013   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.849024   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:56.849032   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:56.849100   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:56.880660   72122 cri.go:89] found id: ""
	I0910 19:01:56.880690   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.880702   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:56.880709   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:56.880756   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:56.922836   72122 cri.go:89] found id: ""
	I0910 19:01:56.922860   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.922867   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:56.922873   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:56.922938   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:01:56.963474   72122 cri.go:89] found id: ""
	I0910 19:01:56.963505   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.963517   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:01:56.963524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:01:56.963585   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:01:56.996837   72122 cri.go:89] found id: ""
	I0910 19:01:56.996864   72122 logs.go:276] 0 containers: []
	W0910 19:01:56.996872   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:01:56.996877   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:01:56.996925   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:01:57.029594   72122 cri.go:89] found id: ""
	I0910 19:01:57.029629   72122 logs.go:276] 0 containers: []
	W0910 19:01:57.029640   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:01:57.029651   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:01:57.029664   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:01:57.083745   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:01:57.083772   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:01:57.099269   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:01:57.099293   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:01:57.174098   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:01:57.174118   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:01:57.174129   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:01:57.258833   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:01:57.258869   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:01:57.224384   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:59.722547   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:57.533178   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:59.533798   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:02.035089   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:57.665383   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:00.164920   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:01:59.800644   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:01:59.814728   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:01:59.814805   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:01:59.854081   72122 cri.go:89] found id: ""
	I0910 19:01:59.854113   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.854124   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:01:59.854133   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:01:59.854197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:01:59.889524   72122 cri.go:89] found id: ""
	I0910 19:01:59.889550   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.889560   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:01:59.889567   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:01:59.889626   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:01:59.925833   72122 cri.go:89] found id: ""
	I0910 19:01:59.925859   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.925866   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:01:59.925872   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:01:59.925935   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:01:59.962538   72122 cri.go:89] found id: ""
	I0910 19:01:59.962575   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.962586   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:01:59.962593   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:01:59.962650   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:01:59.996994   72122 cri.go:89] found id: ""
	I0910 19:01:59.997025   72122 logs.go:276] 0 containers: []
	W0910 19:01:59.997037   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:01:59.997045   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:01:59.997126   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:00.032881   72122 cri.go:89] found id: ""
	I0910 19:02:00.032905   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.032915   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:00.032923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:00.032988   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:00.065838   72122 cri.go:89] found id: ""
	I0910 19:02:00.065861   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.065869   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:00.065874   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:00.065927   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:00.099479   72122 cri.go:89] found id: ""
	I0910 19:02:00.099505   72122 logs.go:276] 0 containers: []
	W0910 19:02:00.099516   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:00.099526   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:00.099540   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:00.182661   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:00.182689   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:00.223514   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:00.223553   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:00.273695   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:00.273721   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:00.287207   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:00.287233   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:00.353975   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:01.724647   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:04.224071   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:06.225475   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:04.534230   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:06.534474   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:02.665228   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:04.667935   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:07.163506   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:02.854145   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:02.867413   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:02.867484   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:02.904299   72122 cri.go:89] found id: ""
	I0910 19:02:02.904327   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.904335   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:02.904340   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:02.904392   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:02.940981   72122 cri.go:89] found id: ""
	I0910 19:02:02.941010   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.941019   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:02.941024   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:02.941099   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:02.980013   72122 cri.go:89] found id: ""
	I0910 19:02:02.980038   72122 logs.go:276] 0 containers: []
	W0910 19:02:02.980046   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:02.980052   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:02.980111   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:03.020041   72122 cri.go:89] found id: ""
	I0910 19:02:03.020071   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.020080   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:03.020087   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:03.020144   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:03.055228   72122 cri.go:89] found id: ""
	I0910 19:02:03.055264   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.055277   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:03.055285   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:03.055347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:03.088696   72122 cri.go:89] found id: ""
	I0910 19:02:03.088722   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.088730   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:03.088736   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:03.088787   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:03.124753   72122 cri.go:89] found id: ""
	I0910 19:02:03.124776   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.124785   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:03.124792   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:03.124849   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:03.157191   72122 cri.go:89] found id: ""
	I0910 19:02:03.157222   72122 logs.go:276] 0 containers: []
	W0910 19:02:03.157230   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:03.157238   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:03.157248   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:03.239015   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:03.239044   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:03.279323   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:03.279355   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:03.328034   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:03.328067   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:03.341591   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:03.341620   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:03.411057   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:05.911503   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:05.924794   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:05.924868   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:05.958827   72122 cri.go:89] found id: ""
	I0910 19:02:05.958852   72122 logs.go:276] 0 containers: []
	W0910 19:02:05.958859   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:05.958865   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:05.958920   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:05.992376   72122 cri.go:89] found id: ""
	I0910 19:02:05.992412   72122 logs.go:276] 0 containers: []
	W0910 19:02:05.992423   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:05.992429   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:05.992485   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:06.028058   72122 cri.go:89] found id: ""
	I0910 19:02:06.028088   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.028098   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:06.028107   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:06.028162   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:06.066428   72122 cri.go:89] found id: ""
	I0910 19:02:06.066458   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.066470   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:06.066477   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:06.066533   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:06.102750   72122 cri.go:89] found id: ""
	I0910 19:02:06.102774   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.102782   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:06.102787   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:06.102841   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:06.137216   72122 cri.go:89] found id: ""
	I0910 19:02:06.137243   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.137254   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:06.137261   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:06.137323   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:06.175227   72122 cri.go:89] found id: ""
	I0910 19:02:06.175251   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.175259   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:06.175265   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:06.175311   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:06.210197   72122 cri.go:89] found id: ""
	I0910 19:02:06.210222   72122 logs.go:276] 0 containers: []
	W0910 19:02:06.210230   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:06.210238   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:06.210249   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:06.261317   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:06.261353   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:06.275196   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:06.275225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:06.354186   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:06.354205   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:06.354219   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:06.436726   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:06.436763   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:08.723505   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:10.724499   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:09.035939   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:11.534648   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:09.166629   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:11.666941   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:08.979157   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:08.992097   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:08.992156   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:09.025260   72122 cri.go:89] found id: ""
	I0910 19:02:09.025282   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.025289   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:09.025295   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:09.025360   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:09.059139   72122 cri.go:89] found id: ""
	I0910 19:02:09.059166   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.059177   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:09.059186   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:09.059240   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:09.092935   72122 cri.go:89] found id: ""
	I0910 19:02:09.092964   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.092973   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:09.092979   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:09.093027   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:09.127273   72122 cri.go:89] found id: ""
	I0910 19:02:09.127299   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.127310   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:09.127317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:09.127367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:09.163353   72122 cri.go:89] found id: ""
	I0910 19:02:09.163380   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.163389   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:09.163396   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:09.163453   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:09.195371   72122 cri.go:89] found id: ""
	I0910 19:02:09.195396   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.195407   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:09.195414   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:09.195473   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:09.229338   72122 cri.go:89] found id: ""
	I0910 19:02:09.229361   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.229370   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:09.229376   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:09.229432   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:09.262822   72122 cri.go:89] found id: ""
	I0910 19:02:09.262847   72122 logs.go:276] 0 containers: []
	W0910 19:02:09.262857   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:09.262874   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:09.262891   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:09.330079   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:09.330103   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:09.330119   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:09.408969   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:09.409003   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:09.447666   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:09.447702   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:09.501111   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:09.501141   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:12.016407   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:12.030822   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:12.030905   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:12.069191   72122 cri.go:89] found id: ""
	I0910 19:02:12.069218   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.069229   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:12.069236   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:12.069306   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:12.103687   72122 cri.go:89] found id: ""
	I0910 19:02:12.103726   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.103737   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:12.103862   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:12.103937   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:12.142891   72122 cri.go:89] found id: ""
	I0910 19:02:12.142920   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.142932   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:12.142940   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:12.142998   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:12.178966   72122 cri.go:89] found id: ""
	I0910 19:02:12.178991   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.179002   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:12.179010   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:12.179069   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:12.216070   72122 cri.go:89] found id: ""
	I0910 19:02:12.216093   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.216104   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:12.216112   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:12.216161   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:12.251447   72122 cri.go:89] found id: ""
	I0910 19:02:12.251479   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.251492   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:12.251500   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:12.251568   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:12.284640   72122 cri.go:89] found id: ""
	I0910 19:02:12.284666   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.284677   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:12.284682   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:12.284743   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:12.319601   72122 cri.go:89] found id: ""
	I0910 19:02:12.319625   72122 logs.go:276] 0 containers: []
	W0910 19:02:12.319632   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:12.319639   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:12.319650   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:12.372932   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:12.372965   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:12.387204   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:12.387228   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:12.459288   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:12.459308   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:12.459323   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:13.223679   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:15.224341   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:14.034036   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:16.533341   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:13.667258   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:16.164610   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:12.549161   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:12.549198   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:15.092557   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:15.105391   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:15.105456   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:15.139486   72122 cri.go:89] found id: ""
	I0910 19:02:15.139515   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.139524   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:15.139530   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:15.139591   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:15.173604   72122 cri.go:89] found id: ""
	I0910 19:02:15.173630   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.173641   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:15.173648   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:15.173710   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:15.208464   72122 cri.go:89] found id: ""
	I0910 19:02:15.208492   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.208503   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:15.208510   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:15.208568   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:15.247536   72122 cri.go:89] found id: ""
	I0910 19:02:15.247567   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.247579   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:15.247586   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:15.247650   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:15.285734   72122 cri.go:89] found id: ""
	I0910 19:02:15.285764   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.285775   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:15.285782   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:15.285858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:15.320755   72122 cri.go:89] found id: ""
	I0910 19:02:15.320782   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.320792   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:15.320798   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:15.320849   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:15.357355   72122 cri.go:89] found id: ""
	I0910 19:02:15.357384   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.357395   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:15.357402   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:15.357463   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:15.392105   72122 cri.go:89] found id: ""
	I0910 19:02:15.392130   72122 logs.go:276] 0 containers: []
	W0910 19:02:15.392137   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:15.392149   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:15.392160   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:15.444433   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:15.444465   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:15.458759   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:15.458784   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:15.523490   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:15.523507   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:15.523524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:15.607584   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:15.607616   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:17.224472   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:19.723953   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:18.534545   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:20.535036   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:18.667949   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:20.669762   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:18.146611   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:18.160311   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:18.160378   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:18.195072   72122 cri.go:89] found id: ""
	I0910 19:02:18.195099   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.195109   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:18.195127   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:18.195201   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:18.230099   72122 cri.go:89] found id: ""
	I0910 19:02:18.230129   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.230138   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:18.230145   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:18.230201   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:18.268497   72122 cri.go:89] found id: ""
	I0910 19:02:18.268525   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.268534   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:18.268539   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:18.268599   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:18.304929   72122 cri.go:89] found id: ""
	I0910 19:02:18.304966   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.304978   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:18.304985   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:18.305048   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:18.339805   72122 cri.go:89] found id: ""
	I0910 19:02:18.339839   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.339861   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:18.339868   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:18.339925   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:18.378353   72122 cri.go:89] found id: ""
	I0910 19:02:18.378372   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.378381   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:18.378393   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:18.378438   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:18.415175   72122 cri.go:89] found id: ""
	I0910 19:02:18.415195   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.415203   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:18.415208   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:18.415262   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:18.450738   72122 cri.go:89] found id: ""
	I0910 19:02:18.450762   72122 logs.go:276] 0 containers: []
	W0910 19:02:18.450769   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:18.450778   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:18.450793   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:18.530943   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:18.530975   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:18.568983   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:18.569021   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:18.622301   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:18.622336   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:18.635788   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:18.635815   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:18.715729   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:21.216082   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:21.229419   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:21.229488   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:21.265152   72122 cri.go:89] found id: ""
	I0910 19:02:21.265183   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.265193   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:21.265201   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:21.265262   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:21.300766   72122 cri.go:89] found id: ""
	I0910 19:02:21.300797   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.300815   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:21.300823   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:21.300883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:21.333416   72122 cri.go:89] found id: ""
	I0910 19:02:21.333443   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.333452   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:21.333460   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:21.333526   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:21.371112   72122 cri.go:89] found id: ""
	I0910 19:02:21.371142   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.371150   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:21.371156   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:21.371214   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:21.405657   72122 cri.go:89] found id: ""
	I0910 19:02:21.405684   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.405695   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:21.405703   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:21.405755   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:21.440354   72122 cri.go:89] found id: ""
	I0910 19:02:21.440381   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.440392   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:21.440400   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:21.440464   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:21.480165   72122 cri.go:89] found id: ""
	I0910 19:02:21.480189   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.480199   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:21.480206   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:21.480273   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:21.518422   72122 cri.go:89] found id: ""
	I0910 19:02:21.518449   72122 logs.go:276] 0 containers: []
	W0910 19:02:21.518459   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:21.518470   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:21.518486   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:21.572263   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:21.572300   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:21.588179   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:21.588204   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:21.658330   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:21.658356   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:21.658371   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:21.743026   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:21.743063   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:21.724730   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.724844   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:26.225026   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.034593   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:25.037588   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:23.164712   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:25.664475   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:24.286604   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:24.299783   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:24.299847   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:24.336998   72122 cri.go:89] found id: ""
	I0910 19:02:24.337031   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.337042   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:24.337050   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:24.337123   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:24.374198   72122 cri.go:89] found id: ""
	I0910 19:02:24.374223   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.374231   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:24.374236   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:24.374289   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:24.407783   72122 cri.go:89] found id: ""
	I0910 19:02:24.407812   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.407822   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:24.407828   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:24.407881   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:24.443285   72122 cri.go:89] found id: ""
	I0910 19:02:24.443307   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.443315   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:24.443321   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:24.443367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:24.477176   72122 cri.go:89] found id: ""
	I0910 19:02:24.477198   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.477206   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:24.477212   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:24.477266   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:24.509762   72122 cri.go:89] found id: ""
	I0910 19:02:24.509783   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.509791   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:24.509797   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:24.509858   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:24.548746   72122 cri.go:89] found id: ""
	I0910 19:02:24.548775   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.548785   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:24.548793   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:24.548851   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:24.583265   72122 cri.go:89] found id: ""
	I0910 19:02:24.583297   72122 logs.go:276] 0 containers: []
	W0910 19:02:24.583313   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:24.583324   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:24.583338   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:24.634966   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:24.635001   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:24.649844   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:24.649869   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:24.721795   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:24.721824   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:24.721840   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:24.807559   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:24.807593   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:27.352779   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:27.366423   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:27.366495   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:27.399555   72122 cri.go:89] found id: ""
	I0910 19:02:27.399582   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.399591   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:27.399596   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:27.399662   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:27.434151   72122 cri.go:89] found id: ""
	I0910 19:02:27.434179   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.434188   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:27.434194   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:27.434265   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:27.467053   72122 cri.go:89] found id: ""
	I0910 19:02:27.467081   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.467092   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:27.467099   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:27.467156   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:28.724149   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:31.224185   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:27.533697   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:29.533815   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:32.034343   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:27.667816   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:30.164174   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:27.500999   72122 cri.go:89] found id: ""
	I0910 19:02:27.501030   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.501039   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:27.501044   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:27.501114   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:27.537981   72122 cri.go:89] found id: ""
	I0910 19:02:27.538000   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.538007   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:27.538012   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:27.538060   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:27.568622   72122 cri.go:89] found id: ""
	I0910 19:02:27.568649   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.568660   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:27.568668   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:27.568724   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:27.603035   72122 cri.go:89] found id: ""
	I0910 19:02:27.603058   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.603067   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:27.603072   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:27.603131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:27.637624   72122 cri.go:89] found id: ""
	I0910 19:02:27.637651   72122 logs.go:276] 0 containers: []
	W0910 19:02:27.637662   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:27.637673   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:27.637693   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:27.651893   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:27.651915   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:27.723949   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:27.723969   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:27.723983   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:27.801463   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:27.801496   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:27.841969   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:27.842000   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:30.398857   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:30.412720   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:30.412790   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:30.448125   72122 cri.go:89] found id: ""
	I0910 19:02:30.448152   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.448163   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:30.448171   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:30.448234   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:30.481988   72122 cri.go:89] found id: ""
	I0910 19:02:30.482016   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.482027   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:30.482035   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:30.482083   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:30.516548   72122 cri.go:89] found id: ""
	I0910 19:02:30.516576   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.516583   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:30.516589   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:30.516646   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:30.566884   72122 cri.go:89] found id: ""
	I0910 19:02:30.566910   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.566918   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:30.566923   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:30.566975   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:30.602278   72122 cri.go:89] found id: ""
	I0910 19:02:30.602306   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.602314   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:30.602319   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:30.602379   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:30.636708   72122 cri.go:89] found id: ""
	I0910 19:02:30.636732   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.636740   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:30.636745   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:30.636797   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:30.681255   72122 cri.go:89] found id: ""
	I0910 19:02:30.681280   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.681295   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:30.681303   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:30.681361   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:30.715516   72122 cri.go:89] found id: ""
	I0910 19:02:30.715543   72122 logs.go:276] 0 containers: []
	W0910 19:02:30.715551   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:30.715560   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:30.715572   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:30.768916   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:30.768948   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:30.783318   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:30.783348   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:30.852901   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:30.852925   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:30.852940   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:30.932276   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:30.932314   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:33.725716   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:36.223970   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:34.533148   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:36.533854   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:32.667516   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:35.164375   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:33.471931   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:33.486152   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:33.486211   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:33.524130   72122 cri.go:89] found id: ""
	I0910 19:02:33.524161   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.524173   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:33.524180   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:33.524238   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:33.562216   72122 cri.go:89] found id: ""
	I0910 19:02:33.562238   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.562247   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:33.562252   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:33.562305   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:33.596587   72122 cri.go:89] found id: ""
	I0910 19:02:33.596615   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.596626   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:33.596634   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:33.596692   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:33.633307   72122 cri.go:89] found id: ""
	I0910 19:02:33.633330   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.633338   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:33.633343   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:33.633403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:33.667780   72122 cri.go:89] found id: ""
	I0910 19:02:33.667805   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.667815   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:33.667820   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:33.667878   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:33.702406   72122 cri.go:89] found id: ""
	I0910 19:02:33.702436   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.702447   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:33.702456   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:33.702524   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:33.744544   72122 cri.go:89] found id: ""
	I0910 19:02:33.744574   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.744581   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:33.744587   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:33.744661   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:33.782000   72122 cri.go:89] found id: ""
	I0910 19:02:33.782024   72122 logs.go:276] 0 containers: []
	W0910 19:02:33.782032   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:33.782040   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:33.782053   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:33.858087   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:33.858115   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:33.858133   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:33.943238   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:33.943278   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:33.987776   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:33.987804   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:34.043197   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:34.043232   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:36.558122   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:36.571125   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:36.571195   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:36.606195   72122 cri.go:89] found id: ""
	I0910 19:02:36.606228   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.606239   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:36.606246   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:36.606304   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:36.640248   72122 cri.go:89] found id: ""
	I0910 19:02:36.640290   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.640302   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:36.640310   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:36.640360   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:36.676916   72122 cri.go:89] found id: ""
	I0910 19:02:36.676942   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.676952   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:36.676958   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:36.677013   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:36.713183   72122 cri.go:89] found id: ""
	I0910 19:02:36.713207   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.713218   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:36.713225   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:36.713283   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:36.750748   72122 cri.go:89] found id: ""
	I0910 19:02:36.750775   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.750782   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:36.750787   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:36.750847   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:36.782614   72122 cri.go:89] found id: ""
	I0910 19:02:36.782636   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.782644   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:36.782650   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:36.782709   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:36.822051   72122 cri.go:89] found id: ""
	I0910 19:02:36.822083   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.822094   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:36.822102   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:36.822161   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:36.856068   72122 cri.go:89] found id: ""
	I0910 19:02:36.856096   72122 logs.go:276] 0 containers: []
	W0910 19:02:36.856106   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:36.856117   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:36.856132   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:36.909586   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:36.909625   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:36.931649   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:36.931676   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:37.040146   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:37.040175   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:37.040194   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:37.121902   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:37.121942   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:38.723762   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:40.723880   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:38.534001   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:41.035356   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:37.665212   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:39.668115   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:42.164118   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:39.665474   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:39.678573   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:39.678633   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:39.712755   72122 cri.go:89] found id: ""
	I0910 19:02:39.712783   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.712793   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:39.712800   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:39.712857   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:39.744709   72122 cri.go:89] found id: ""
	I0910 19:02:39.744738   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.744748   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:39.744756   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:39.744809   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:39.780161   72122 cri.go:89] found id: ""
	I0910 19:02:39.780189   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.780200   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:39.780207   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:39.780255   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:39.817665   72122 cri.go:89] found id: ""
	I0910 19:02:39.817695   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.817704   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:39.817709   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:39.817757   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:39.857255   72122 cri.go:89] found id: ""
	I0910 19:02:39.857291   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.857299   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:39.857306   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:39.857381   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:39.893514   72122 cri.go:89] found id: ""
	I0910 19:02:39.893540   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.893550   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:39.893558   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:39.893614   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:39.932720   72122 cri.go:89] found id: ""
	I0910 19:02:39.932753   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.932767   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:39.932775   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:39.932835   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:39.977063   72122 cri.go:89] found id: ""
	I0910 19:02:39.977121   72122 logs.go:276] 0 containers: []
	W0910 19:02:39.977135   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:39.977146   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:39.977168   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:39.991414   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:39.991445   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:40.066892   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:40.066910   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:40.066922   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:40.150648   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:40.150680   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:40.198519   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:40.198561   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:42.724332   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:45.223804   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:43.533841   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:45.534665   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:44.164851   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:46.165259   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:42.749906   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:42.769633   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:42.769703   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:42.812576   72122 cri.go:89] found id: ""
	I0910 19:02:42.812603   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.812613   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:42.812620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:42.812682   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:42.846233   72122 cri.go:89] found id: ""
	I0910 19:02:42.846257   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.846266   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:42.846271   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:42.846326   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:42.883564   72122 cri.go:89] found id: ""
	I0910 19:02:42.883593   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.883605   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:42.883612   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:42.883669   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:42.920774   72122 cri.go:89] found id: ""
	I0910 19:02:42.920801   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.920813   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:42.920820   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:42.920883   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:42.953776   72122 cri.go:89] found id: ""
	I0910 19:02:42.953808   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.953820   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:42.953829   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:42.953887   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:42.989770   72122 cri.go:89] found id: ""
	I0910 19:02:42.989806   72122 logs.go:276] 0 containers: []
	W0910 19:02:42.989820   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:42.989829   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:42.989893   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:43.022542   72122 cri.go:89] found id: ""
	I0910 19:02:43.022567   72122 logs.go:276] 0 containers: []
	W0910 19:02:43.022574   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:43.022580   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:43.022629   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:43.064308   72122 cri.go:89] found id: ""
	I0910 19:02:43.064329   72122 logs.go:276] 0 containers: []
	W0910 19:02:43.064337   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:43.064344   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:43.064356   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:43.120212   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:43.120243   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:43.134269   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:43.134296   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:43.218840   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:43.218865   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:43.218880   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:43.302560   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:43.302591   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:45.842788   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:45.857495   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:45.857557   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:45.892745   72122 cri.go:89] found id: ""
	I0910 19:02:45.892772   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.892782   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:45.892790   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:45.892888   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:45.928451   72122 cri.go:89] found id: ""
	I0910 19:02:45.928476   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.928486   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:45.928493   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:45.928551   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:45.962868   72122 cri.go:89] found id: ""
	I0910 19:02:45.962899   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.962910   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:45.962918   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:45.962979   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:45.996975   72122 cri.go:89] found id: ""
	I0910 19:02:45.997000   72122 logs.go:276] 0 containers: []
	W0910 19:02:45.997009   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:45.997014   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:45.997065   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:46.032271   72122 cri.go:89] found id: ""
	I0910 19:02:46.032299   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.032309   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:46.032317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:46.032375   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:46.072629   72122 cri.go:89] found id: ""
	I0910 19:02:46.072654   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.072662   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:46.072667   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:46.072713   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:46.112196   72122 cri.go:89] found id: ""
	I0910 19:02:46.112220   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.112228   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:46.112233   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:46.112298   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:46.155700   72122 cri.go:89] found id: ""
	I0910 19:02:46.155732   72122 logs.go:276] 0 containers: []
	W0910 19:02:46.155745   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:46.155759   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:46.155794   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:46.210596   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:46.210624   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:46.224951   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:46.224980   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:46.294571   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:46.294597   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:46.294613   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:46.382431   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:46.382495   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:47.224808   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:49.225392   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:51.227601   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:48.033643   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:50.535490   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:48.665543   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:50.666596   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:48.926582   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:48.941256   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:48.941338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:48.979810   72122 cri.go:89] found id: ""
	I0910 19:02:48.979842   72122 logs.go:276] 0 containers: []
	W0910 19:02:48.979849   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:48.979856   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:48.979917   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:49.015083   72122 cri.go:89] found id: ""
	I0910 19:02:49.015126   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.015136   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:49.015144   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:49.015205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:49.052417   72122 cri.go:89] found id: ""
	I0910 19:02:49.052445   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.052453   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:49.052459   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:49.052511   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:49.092485   72122 cri.go:89] found id: ""
	I0910 19:02:49.092523   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.092533   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:49.092538   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:49.092588   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:49.127850   72122 cri.go:89] found id: ""
	I0910 19:02:49.127882   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.127889   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:49.127897   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:49.127952   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:49.160693   72122 cri.go:89] found id: ""
	I0910 19:02:49.160724   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.160733   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:49.160740   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:49.160798   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:49.194713   72122 cri.go:89] found id: ""
	I0910 19:02:49.194737   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.194744   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:49.194750   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:49.194804   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:49.229260   72122 cri.go:89] found id: ""
	I0910 19:02:49.229283   72122 logs.go:276] 0 containers: []
	W0910 19:02:49.229292   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:49.229303   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:49.229320   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:49.281963   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:49.281992   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:49.294789   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:49.294809   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:49.366126   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:49.366152   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:49.366172   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:49.451187   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:49.451225   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:51.990361   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:52.003744   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:52.003807   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:52.036794   72122 cri.go:89] found id: ""
	I0910 19:02:52.036824   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.036834   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:52.036840   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:52.036896   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:52.074590   72122 cri.go:89] found id: ""
	I0910 19:02:52.074613   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.074620   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:52.074625   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:52.074675   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:52.119926   72122 cri.go:89] found id: ""
	I0910 19:02:52.119967   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.119981   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:52.119990   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:52.120075   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:52.157862   72122 cri.go:89] found id: ""
	I0910 19:02:52.157889   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.157900   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:52.157906   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:52.157968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:52.198645   72122 cri.go:89] found id: ""
	I0910 19:02:52.198675   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.198686   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:52.198693   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:52.198756   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:52.240091   72122 cri.go:89] found id: ""
	I0910 19:02:52.240113   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.240129   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:52.240139   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:52.240197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:52.275046   72122 cri.go:89] found id: ""
	I0910 19:02:52.275079   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.275090   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:52.275098   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:52.275179   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:52.311141   72122 cri.go:89] found id: ""
	I0910 19:02:52.311172   72122 logs.go:276] 0 containers: []
	W0910 19:02:52.311184   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:52.311196   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:52.311211   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:52.400004   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:52.400039   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:52.449043   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:52.449090   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:53.724151   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:56.223353   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:53.033328   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:55.035259   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:53.164639   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:55.165714   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
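
The pod_ready lines interleaved here come from other minikube profiles in the same parallel run (PIDs 71627, 71529 and 71183), each polling a metrics-server pod that never reports Ready. A sketch of the equivalent manual check, assuming kubectl has a context for the relevant profile (the pod name is taken from the log; the context name is a placeholder):

    # Inspect the Ready condition of the metrics-server pod named in the log.
    # <profile-context> is a placeholder for whichever profile is being debugged.
    kubectl --context <profile-context> -n kube-system get pod metrics-server-6867b74b74-4sfwg \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
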
	I0910 19:02:52.502304   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:52.502336   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:52.518747   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:52.518772   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:52.593581   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:55.094092   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:55.108752   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:55.108830   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:55.143094   72122 cri.go:89] found id: ""
	I0910 19:02:55.143122   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.143133   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:55.143141   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:55.143198   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:55.184298   72122 cri.go:89] found id: ""
	I0910 19:02:55.184326   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.184334   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:55.184340   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:55.184397   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:55.216557   72122 cri.go:89] found id: ""
	I0910 19:02:55.216585   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.216596   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:55.216613   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:55.216676   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:55.251049   72122 cri.go:89] found id: ""
	I0910 19:02:55.251075   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.251083   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:55.251090   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:55.251152   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:55.282689   72122 cri.go:89] found id: ""
	I0910 19:02:55.282716   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.282724   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:55.282729   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:55.282800   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:55.316959   72122 cri.go:89] found id: ""
	I0910 19:02:55.316993   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.317004   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:55.317011   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:55.317085   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:55.353110   72122 cri.go:89] found id: ""
	I0910 19:02:55.353134   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.353143   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:55.353149   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:55.353205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:55.392391   72122 cri.go:89] found id: ""
	I0910 19:02:55.392422   72122 logs.go:276] 0 containers: []
	W0910 19:02:55.392434   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:55.392446   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:55.392461   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:55.445431   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:55.445469   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:55.459348   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:55.459374   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:55.528934   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:55.528957   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:55.528973   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:55.610797   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:55.610833   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:02:58.223882   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:00.223951   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:57.533754   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:59.535018   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:01.535255   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:57.667276   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:00.164510   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:02:58.152775   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:02:58.166383   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:02:58.166440   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:02:58.203198   72122 cri.go:89] found id: ""
	I0910 19:02:58.203225   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.203233   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:02:58.203239   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:02:58.203284   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:02:58.240538   72122 cri.go:89] found id: ""
	I0910 19:02:58.240560   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.240567   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:02:58.240573   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:02:58.240633   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:02:58.274802   72122 cri.go:89] found id: ""
	I0910 19:02:58.274826   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.274833   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:02:58.274839   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:02:58.274886   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:02:58.311823   72122 cri.go:89] found id: ""
	I0910 19:02:58.311857   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.311868   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:02:58.311876   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:02:58.311933   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:02:58.347260   72122 cri.go:89] found id: ""
	I0910 19:02:58.347281   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.347288   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:02:58.347294   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:02:58.347338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:02:58.382621   72122 cri.go:89] found id: ""
	I0910 19:02:58.382645   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.382655   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:02:58.382662   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:02:58.382720   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:02:58.418572   72122 cri.go:89] found id: ""
	I0910 19:02:58.418597   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.418605   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:02:58.418611   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:02:58.418663   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:02:58.459955   72122 cri.go:89] found id: ""
	I0910 19:02:58.459987   72122 logs.go:276] 0 containers: []
	W0910 19:02:58.459995   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:02:58.460003   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:02:58.460016   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:02:58.512831   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:02:58.512866   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:02:58.527036   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:02:58.527067   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:02:58.593329   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:02:58.593350   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:02:58.593366   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:02:58.671171   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:02:58.671201   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
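
Each polling round gathers the same diagnostics in turn: kubelet journal, dmesg, describe nodes (which fails as shown above), the CRI-O journal, and container status. To collect the same data manually on the node, the commands below can be run directly; each one is copied from the log, and only the interactive invocation is assumed:

    sudo journalctl -u kubelet -n 400                                            # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400      # kernel warnings/errors
    sudo journalctl -u crio -n 400                                               # CRI-O logs
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a                # container status
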
	I0910 19:03:01.211905   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:01.226567   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:01.226724   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:01.261860   72122 cri.go:89] found id: ""
	I0910 19:03:01.261885   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.261893   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:01.261898   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:01.261946   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:01.294754   72122 cri.go:89] found id: ""
	I0910 19:03:01.294774   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.294781   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:01.294786   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:01.294833   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:01.328378   72122 cri.go:89] found id: ""
	I0910 19:03:01.328403   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.328412   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:01.328417   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:01.328465   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:01.363344   72122 cri.go:89] found id: ""
	I0910 19:03:01.363370   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.363380   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:01.363388   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:01.363446   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:01.398539   72122 cri.go:89] found id: ""
	I0910 19:03:01.398576   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.398586   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:01.398593   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:01.398654   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:01.431367   72122 cri.go:89] found id: ""
	I0910 19:03:01.431390   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.431397   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:01.431403   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:01.431458   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:01.464562   72122 cri.go:89] found id: ""
	I0910 19:03:01.464589   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.464599   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:01.464606   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:01.464666   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:01.497493   72122 cri.go:89] found id: ""
	I0910 19:03:01.497520   72122 logs.go:276] 0 containers: []
	W0910 19:03:01.497531   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:01.497540   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:01.497555   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:01.583083   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:01.583140   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:01.624887   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:01.624919   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:01.676124   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:01.676155   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:01.690861   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:01.690894   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:01.763695   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:02.724017   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.725049   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.033371   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:06.033600   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:02.666137   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.669740   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:07.164822   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:04.264867   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:04.279106   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:04.279176   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:04.315358   72122 cri.go:89] found id: ""
	I0910 19:03:04.315390   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.315398   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:04.315403   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:04.315457   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:04.359466   72122 cri.go:89] found id: ""
	I0910 19:03:04.359489   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.359496   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:04.359504   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:04.359563   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:04.399504   72122 cri.go:89] found id: ""
	I0910 19:03:04.399529   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.399538   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:04.399545   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:04.399604   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:04.438244   72122 cri.go:89] found id: ""
	I0910 19:03:04.438269   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.438277   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:04.438282   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:04.438340   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:04.475299   72122 cri.go:89] found id: ""
	I0910 19:03:04.475321   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.475329   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:04.475334   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:04.475386   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:04.516500   72122 cri.go:89] found id: ""
	I0910 19:03:04.516520   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.516529   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:04.516534   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:04.516588   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:04.551191   72122 cri.go:89] found id: ""
	I0910 19:03:04.551214   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.551222   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:04.551228   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:04.551273   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:04.585646   72122 cri.go:89] found id: ""
	I0910 19:03:04.585667   72122 logs.go:276] 0 containers: []
	W0910 19:03:04.585675   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:04.585684   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:04.585699   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:04.598832   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:04.598858   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:04.670117   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:04.670140   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:04.670156   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:04.746592   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:04.746626   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:04.784061   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:04.784088   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:07.337082   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:07.350696   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:07.350752   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:07.387344   72122 cri.go:89] found id: ""
	I0910 19:03:07.387373   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.387384   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:07.387391   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:07.387449   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:07.420468   72122 cri.go:89] found id: ""
	I0910 19:03:07.420490   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.420498   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:07.420503   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:07.420566   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:07.453746   72122 cri.go:89] found id: ""
	I0910 19:03:07.453773   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.453784   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:07.453791   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:07.453845   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:07.487359   72122 cri.go:89] found id: ""
	I0910 19:03:07.487388   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.487400   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:07.487407   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:07.487470   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:07.223432   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:09.723164   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:08.033767   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:10.035613   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:09.165972   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:11.663740   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:07.520803   72122 cri.go:89] found id: ""
	I0910 19:03:07.520827   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.520834   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:07.520839   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:07.520898   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:07.556908   72122 cri.go:89] found id: ""
	I0910 19:03:07.556934   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.556945   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:07.556953   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:07.557017   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:07.596072   72122 cri.go:89] found id: ""
	I0910 19:03:07.596093   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.596102   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:07.596107   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:07.596165   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:07.631591   72122 cri.go:89] found id: ""
	I0910 19:03:07.631620   72122 logs.go:276] 0 containers: []
	W0910 19:03:07.631630   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:07.631639   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:07.631661   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:07.683892   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:07.683923   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:07.697619   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:07.697645   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:07.766370   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:07.766397   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:07.766413   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:07.854102   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:07.854140   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:10.400185   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:10.412771   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:10.412842   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:10.447710   72122 cri.go:89] found id: ""
	I0910 19:03:10.447739   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.447750   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:10.447757   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:10.447822   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:10.480865   72122 cri.go:89] found id: ""
	I0910 19:03:10.480892   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.480902   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:10.480909   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:10.480966   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:10.514893   72122 cri.go:89] found id: ""
	I0910 19:03:10.514919   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.514927   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:10.514933   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:10.514994   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:10.556332   72122 cri.go:89] found id: ""
	I0910 19:03:10.556374   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.556385   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:10.556392   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:10.556457   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:10.590529   72122 cri.go:89] found id: ""
	I0910 19:03:10.590562   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.590573   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:10.590581   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:10.590642   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:10.623697   72122 cri.go:89] found id: ""
	I0910 19:03:10.623724   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.623732   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:10.623737   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:10.623788   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:10.659236   72122 cri.go:89] found id: ""
	I0910 19:03:10.659259   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.659270   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:10.659277   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:10.659338   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:10.693150   72122 cri.go:89] found id: ""
	I0910 19:03:10.693182   72122 logs.go:276] 0 containers: []
	W0910 19:03:10.693192   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:10.693202   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:10.693217   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:10.744624   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:10.744663   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:10.758797   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:10.758822   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:10.853796   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:10.853815   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:10.853827   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:10.937972   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:10.938019   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:11.724808   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:14.224052   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:12.535134   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:15.033867   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:17.034507   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:13.667548   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:16.164483   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:13.481898   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:13.495440   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:13.495505   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:13.531423   72122 cri.go:89] found id: ""
	I0910 19:03:13.531452   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.531463   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:13.531470   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:13.531532   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:13.571584   72122 cri.go:89] found id: ""
	I0910 19:03:13.571607   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.571615   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:13.571620   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:13.571674   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:13.609670   72122 cri.go:89] found id: ""
	I0910 19:03:13.609695   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.609702   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:13.609707   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:13.609761   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:13.644726   72122 cri.go:89] found id: ""
	I0910 19:03:13.644755   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.644766   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:13.644773   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:13.644831   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:13.679692   72122 cri.go:89] found id: ""
	I0910 19:03:13.679722   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.679733   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:13.679741   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:13.679791   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:13.717148   72122 cri.go:89] found id: ""
	I0910 19:03:13.717177   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.717186   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:13.717192   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:13.717247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:13.755650   72122 cri.go:89] found id: ""
	I0910 19:03:13.755676   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.755688   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:13.755693   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:13.755740   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:13.788129   72122 cri.go:89] found id: ""
	I0910 19:03:13.788158   72122 logs.go:276] 0 containers: []
	W0910 19:03:13.788169   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:13.788179   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:13.788194   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:13.865241   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:13.865277   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:13.909205   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:13.909233   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:13.963495   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:13.963523   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:13.977311   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:13.977337   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:14.047015   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:16.547505   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:16.568333   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:16.568412   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:16.610705   72122 cri.go:89] found id: ""
	I0910 19:03:16.610734   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.610744   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:16.610752   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:16.610808   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:16.647307   72122 cri.go:89] found id: ""
	I0910 19:03:16.647333   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.647340   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:16.647345   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:16.647409   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:16.684513   72122 cri.go:89] found id: ""
	I0910 19:03:16.684536   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.684544   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:16.684549   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:16.684602   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:16.718691   72122 cri.go:89] found id: ""
	I0910 19:03:16.718719   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.718729   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:16.718734   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:16.718794   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:16.753250   72122 cri.go:89] found id: ""
	I0910 19:03:16.753279   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.753291   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:16.753298   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:16.753358   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:16.788953   72122 cri.go:89] found id: ""
	I0910 19:03:16.788984   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.789001   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:16.789009   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:16.789084   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:16.823715   72122 cri.go:89] found id: ""
	I0910 19:03:16.823746   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.823760   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:16.823767   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:16.823837   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:16.858734   72122 cri.go:89] found id: ""
	I0910 19:03:16.858758   72122 logs.go:276] 0 containers: []
	W0910 19:03:16.858770   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:16.858780   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:16.858795   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:16.897983   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:16.898012   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:16.950981   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:16.951015   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:16.964809   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:16.964839   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:17.039142   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:17.039163   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:17.039177   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:16.724218   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:19.223909   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:19.533783   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:21.534203   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:18.164708   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:20.664302   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:19.619941   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:19.634432   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:19.634489   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:19.671220   72122 cri.go:89] found id: ""
	I0910 19:03:19.671246   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.671256   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:19.671264   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:19.671322   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:19.704251   72122 cri.go:89] found id: ""
	I0910 19:03:19.704278   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.704294   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:19.704301   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:19.704347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:19.745366   72122 cri.go:89] found id: ""
	I0910 19:03:19.745393   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.745403   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:19.745410   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:19.745466   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:19.781100   72122 cri.go:89] found id: ""
	I0910 19:03:19.781129   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.781136   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:19.781141   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:19.781195   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:19.817177   72122 cri.go:89] found id: ""
	I0910 19:03:19.817207   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.817219   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:19.817226   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:19.817292   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:19.852798   72122 cri.go:89] found id: ""
	I0910 19:03:19.852829   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.852837   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:19.852842   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:19.852889   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:19.887173   72122 cri.go:89] found id: ""
	I0910 19:03:19.887200   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.887210   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:19.887219   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:19.887409   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:19.922997   72122 cri.go:89] found id: ""
	I0910 19:03:19.923026   72122 logs.go:276] 0 containers: []
	W0910 19:03:19.923038   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:19.923049   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:19.923063   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:19.975703   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:19.975736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:19.989834   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:19.989866   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:20.061312   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:20.061332   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:20.061344   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:20.143045   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:20.143080   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:21.723250   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:23.723771   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:25.724346   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:24.036790   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:26.533830   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:22.664756   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:25.164650   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:22.681900   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:22.694860   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:22.694923   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:22.738529   72122 cri.go:89] found id: ""
	I0910 19:03:22.738553   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.738563   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:22.738570   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:22.738640   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:22.778102   72122 cri.go:89] found id: ""
	I0910 19:03:22.778132   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.778143   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:22.778150   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:22.778207   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:22.813273   72122 cri.go:89] found id: ""
	I0910 19:03:22.813307   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.813320   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:22.813334   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:22.813397   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:22.849613   72122 cri.go:89] found id: ""
	I0910 19:03:22.849637   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.849646   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:22.849651   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:22.849701   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:22.883138   72122 cri.go:89] found id: ""
	I0910 19:03:22.883167   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.883178   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:22.883185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:22.883237   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:22.918521   72122 cri.go:89] found id: ""
	I0910 19:03:22.918550   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.918567   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:22.918574   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:22.918632   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:22.966657   72122 cri.go:89] found id: ""
	I0910 19:03:22.966684   72122 logs.go:276] 0 containers: []
	W0910 19:03:22.966691   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:22.966701   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:22.966762   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:23.022254   72122 cri.go:89] found id: ""
	I0910 19:03:23.022282   72122 logs.go:276] 0 containers: []
	W0910 19:03:23.022290   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:23.022298   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:23.022309   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:23.082347   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:23.082386   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:23.096792   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:23.096814   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:23.172720   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:23.172740   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:23.172754   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:23.256155   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:23.256193   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:25.797211   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:25.810175   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:25.810234   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:25.844848   72122 cri.go:89] found id: ""
	I0910 19:03:25.844876   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.844886   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:25.844901   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:25.844968   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:25.877705   72122 cri.go:89] found id: ""
	I0910 19:03:25.877736   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.877747   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:25.877755   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:25.877807   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:25.913210   72122 cri.go:89] found id: ""
	I0910 19:03:25.913238   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.913248   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:25.913256   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:25.913316   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:25.947949   72122 cri.go:89] found id: ""
	I0910 19:03:25.947974   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.947984   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:25.947991   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:25.948050   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:25.983487   72122 cri.go:89] found id: ""
	I0910 19:03:25.983511   72122 logs.go:276] 0 containers: []
	W0910 19:03:25.983519   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:25.983524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:25.983573   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:26.018176   72122 cri.go:89] found id: ""
	I0910 19:03:26.018201   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.018209   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:26.018214   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:26.018271   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:26.052063   72122 cri.go:89] found id: ""
	I0910 19:03:26.052087   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.052097   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:26.052104   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:26.052165   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:26.091919   72122 cri.go:89] found id: ""
	I0910 19:03:26.091949   72122 logs.go:276] 0 containers: []
	W0910 19:03:26.091958   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:26.091968   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:26.091983   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:26.146059   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:26.146094   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:26.160529   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:26.160562   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:26.230742   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:26.230764   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:26.230778   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:26.313191   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:26.313222   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:27.724922   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:30.223811   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:29.039957   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:31.533256   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:27.665626   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:29.666857   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:32.165153   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:28.858457   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:28.873725   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:28.873788   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:28.922685   72122 cri.go:89] found id: ""
	I0910 19:03:28.922717   72122 logs.go:276] 0 containers: []
	W0910 19:03:28.922729   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:28.922737   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:28.922795   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:28.973236   72122 cri.go:89] found id: ""
	I0910 19:03:28.973260   72122 logs.go:276] 0 containers: []
	W0910 19:03:28.973270   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:28.973277   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:28.973339   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:29.008999   72122 cri.go:89] found id: ""
	I0910 19:03:29.009049   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.009062   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:29.009081   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:29.009148   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:29.049009   72122 cri.go:89] found id: ""
	I0910 19:03:29.049037   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.049047   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:29.049056   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:29.049131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:29.089543   72122 cri.go:89] found id: ""
	I0910 19:03:29.089564   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.089573   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:29.089578   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:29.089648   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:29.126887   72122 cri.go:89] found id: ""
	I0910 19:03:29.126911   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.126918   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:29.126924   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:29.126969   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:29.161369   72122 cri.go:89] found id: ""
	I0910 19:03:29.161395   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.161405   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:29.161412   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:29.161474   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:29.199627   72122 cri.go:89] found id: ""
	I0910 19:03:29.199652   72122 logs.go:276] 0 containers: []
	W0910 19:03:29.199661   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:29.199672   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:29.199691   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:29.268353   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:29.268386   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:29.268401   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:29.351470   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:29.351504   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:29.391768   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:29.391796   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:29.442705   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:29.442736   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:31.957567   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:31.970218   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:31.970274   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:32.004870   72122 cri.go:89] found id: ""
	I0910 19:03:32.004898   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.004908   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:32.004915   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:32.004971   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:32.045291   72122 cri.go:89] found id: ""
	I0910 19:03:32.045322   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.045331   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:32.045337   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:32.045403   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:32.085969   72122 cri.go:89] found id: ""
	I0910 19:03:32.085999   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.086007   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:32.086013   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:32.086067   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:32.120100   72122 cri.go:89] found id: ""
	I0910 19:03:32.120127   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.120135   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:32.120141   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:32.120187   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:32.153977   72122 cri.go:89] found id: ""
	I0910 19:03:32.154004   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.154011   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:32.154016   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:32.154065   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:32.195980   72122 cri.go:89] found id: ""
	I0910 19:03:32.196005   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.196013   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:32.196019   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:32.196068   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:32.233594   72122 cri.go:89] found id: ""
	I0910 19:03:32.233616   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.233623   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:32.233632   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:32.233677   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:32.268118   72122 cri.go:89] found id: ""
	I0910 19:03:32.268144   72122 logs.go:276] 0 containers: []
	W0910 19:03:32.268152   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:32.268160   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:32.268171   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:32.281389   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:32.281416   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:32.359267   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:32.359289   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:32.359304   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:32.445096   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:32.445137   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:32.483288   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:32.483325   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:32.224155   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:34.724191   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:33.537955   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:36.033801   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:34.663475   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:36.665627   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:35.040393   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:35.053698   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:35.053750   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:35.087712   72122 cri.go:89] found id: ""
	I0910 19:03:35.087742   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.087751   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:35.087757   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:35.087802   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:35.125437   72122 cri.go:89] found id: ""
	I0910 19:03:35.125468   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.125482   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:35.125495   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:35.125562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:35.163885   72122 cri.go:89] found id: ""
	I0910 19:03:35.163914   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.163924   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:35.163931   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:35.163989   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:35.199426   72122 cri.go:89] found id: ""
	I0910 19:03:35.199459   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.199471   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:35.199479   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:35.199559   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:35.236388   72122 cri.go:89] found id: ""
	I0910 19:03:35.236408   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.236416   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:35.236421   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:35.236465   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:35.274797   72122 cri.go:89] found id: ""
	I0910 19:03:35.274817   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.274825   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:35.274830   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:35.274874   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:35.308127   72122 cri.go:89] found id: ""
	I0910 19:03:35.308155   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.308166   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:35.308173   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:35.308230   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:35.340675   72122 cri.go:89] found id: ""
	I0910 19:03:35.340697   72122 logs.go:276] 0 containers: []
	W0910 19:03:35.340704   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:35.340712   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:35.340727   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:35.390806   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:35.390842   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:35.404427   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:35.404458   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:35.471526   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:35.471560   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:35.471575   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:35.547469   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:35.547497   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:37.223464   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:39.224137   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:41.224189   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:38.534280   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:40.534728   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:38.666077   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:41.165483   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:38.087127   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:38.100195   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:38.100251   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:38.135386   72122 cri.go:89] found id: ""
	I0910 19:03:38.135408   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.135416   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:38.135422   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:38.135480   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:38.168531   72122 cri.go:89] found id: ""
	I0910 19:03:38.168558   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.168568   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:38.168577   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:38.168639   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:38.202931   72122 cri.go:89] found id: ""
	I0910 19:03:38.202958   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.202968   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:38.202974   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:38.203030   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:38.239185   72122 cri.go:89] found id: ""
	I0910 19:03:38.239209   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.239219   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:38.239226   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:38.239279   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:38.276927   72122 cri.go:89] found id: ""
	I0910 19:03:38.276952   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.276961   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:38.276967   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:38.277035   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:38.311923   72122 cri.go:89] found id: ""
	I0910 19:03:38.311951   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.311962   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:38.311971   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:38.312034   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:38.344981   72122 cri.go:89] found id: ""
	I0910 19:03:38.345012   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.345023   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:38.345030   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:38.345099   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:38.378012   72122 cri.go:89] found id: ""
	I0910 19:03:38.378037   72122 logs.go:276] 0 containers: []
	W0910 19:03:38.378048   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:38.378058   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:38.378076   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:38.449361   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:38.449384   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:38.449396   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:38.530683   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:38.530713   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:38.570047   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:38.570073   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:38.620143   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:38.620176   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:41.134152   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:41.148416   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:41.148509   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:41.186681   72122 cri.go:89] found id: ""
	I0910 19:03:41.186706   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.186713   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:41.186719   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:41.186767   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:41.221733   72122 cri.go:89] found id: ""
	I0910 19:03:41.221758   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.221769   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:41.221776   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:41.221834   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:41.256099   72122 cri.go:89] found id: ""
	I0910 19:03:41.256125   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.256136   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:41.256143   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:41.256194   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:41.289825   72122 cri.go:89] found id: ""
	I0910 19:03:41.289850   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.289860   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:41.289867   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:41.289926   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:41.323551   72122 cri.go:89] found id: ""
	I0910 19:03:41.323581   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.323594   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:41.323601   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:41.323659   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:41.356508   72122 cri.go:89] found id: ""
	I0910 19:03:41.356535   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.356546   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:41.356553   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:41.356608   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:41.391556   72122 cri.go:89] found id: ""
	I0910 19:03:41.391579   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.391586   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:41.391592   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:41.391651   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:41.427685   72122 cri.go:89] found id: ""
	I0910 19:03:41.427711   72122 logs.go:276] 0 containers: []
	W0910 19:03:41.427726   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:41.427743   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:41.427758   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:41.481970   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:41.482001   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:41.495266   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:41.495290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:41.568334   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:41.568357   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:41.568370   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:41.650178   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:41.650211   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:43.724494   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:46.223803   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:43.034100   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:45.035091   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:43.167877   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:45.664633   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:44.193665   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:44.209118   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:44.209197   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:44.245792   72122 cri.go:89] found id: ""
	I0910 19:03:44.245819   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.245829   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:44.245834   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:44.245900   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:44.285673   72122 cri.go:89] found id: ""
	I0910 19:03:44.285699   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.285711   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:44.285719   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:44.285787   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:44.326471   72122 cri.go:89] found id: ""
	I0910 19:03:44.326495   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.326505   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:44.326520   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:44.326589   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:44.367864   72122 cri.go:89] found id: ""
	I0910 19:03:44.367890   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.367898   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:44.367907   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:44.367954   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:44.407161   72122 cri.go:89] found id: ""
	I0910 19:03:44.407185   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.407193   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:44.407198   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:44.407256   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:44.446603   72122 cri.go:89] found id: ""
	I0910 19:03:44.446628   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.446638   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:44.446645   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:44.446705   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:44.486502   72122 cri.go:89] found id: ""
	I0910 19:03:44.486526   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.486536   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:44.486543   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:44.486605   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:44.524992   72122 cri.go:89] found id: ""
	I0910 19:03:44.525017   72122 logs.go:276] 0 containers: []
	W0910 19:03:44.525025   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:44.525033   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:44.525044   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:44.579387   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:44.579418   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:44.594045   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:44.594070   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:44.678857   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:44.678883   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:44.678897   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:44.763799   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:44.763830   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:47.305631   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:47.319275   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:47.319347   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:47.359199   72122 cri.go:89] found id: ""
	I0910 19:03:47.359222   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.359233   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:47.359240   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:47.359300   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:47.397579   72122 cri.go:89] found id: ""
	I0910 19:03:47.397602   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.397610   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:47.397616   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:47.397674   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:47.431114   72122 cri.go:89] found id: ""
	I0910 19:03:47.431138   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.431146   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:47.431151   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:47.431205   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:47.470475   72122 cri.go:89] found id: ""
	I0910 19:03:47.470499   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.470509   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:47.470515   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:47.470573   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:48.227625   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:50.725421   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:47.534967   71529 pod_ready.go:103] pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:49.027864   71529 pod_ready.go:82] duration metric: took 4m0.000448579s for pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace to be "Ready" ...
	E0910 19:03:49.027890   71529 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-w8rqv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0910 19:03:49.027905   71529 pod_ready.go:39] duration metric: took 4m14.536052937s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:03:49.027929   71529 kubeadm.go:597] duration metric: took 4m22.283340761s to restartPrimaryControlPlane
	W0910 19:03:49.027982   71529 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0910 19:03:49.028009   71529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:03:47.668029   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:50.164077   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:47.504484   72122 cri.go:89] found id: ""
	I0910 19:03:47.504509   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.504518   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:47.504524   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:47.504577   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:47.541633   72122 cri.go:89] found id: ""
	I0910 19:03:47.541651   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.541658   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:47.541663   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:47.541706   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:47.579025   72122 cri.go:89] found id: ""
	I0910 19:03:47.579051   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.579060   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:47.579068   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:47.579123   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:47.612333   72122 cri.go:89] found id: ""
	I0910 19:03:47.612359   72122 logs.go:276] 0 containers: []
	W0910 19:03:47.612370   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:47.612380   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:47.612395   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:47.667214   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:47.667242   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:47.683425   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:47.683466   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:47.749510   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:47.749531   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:47.749543   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:47.830454   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:47.830487   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:50.373207   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:50.387191   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:50.387247   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:50.422445   72122 cri.go:89] found id: ""
	I0910 19:03:50.422476   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.422488   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:50.422495   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:50.422562   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:50.456123   72122 cri.go:89] found id: ""
	I0910 19:03:50.456145   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.456153   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:50.456157   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:50.456211   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:50.488632   72122 cri.go:89] found id: ""
	I0910 19:03:50.488661   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.488672   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:50.488680   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:50.488736   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:50.523603   72122 cri.go:89] found id: ""
	I0910 19:03:50.523628   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.523636   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:50.523641   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:50.523699   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:50.559741   72122 cri.go:89] found id: ""
	I0910 19:03:50.559765   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.559773   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:50.559778   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:50.559842   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:50.595387   72122 cri.go:89] found id: ""
	I0910 19:03:50.595406   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.595414   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:50.595420   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:50.595472   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:50.628720   72122 cri.go:89] found id: ""
	I0910 19:03:50.628747   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.628767   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:50.628774   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:50.628833   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:50.660635   72122 cri.go:89] found id: ""
	I0910 19:03:50.660655   72122 logs.go:276] 0 containers: []
	W0910 19:03:50.660663   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:50.660671   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:50.660683   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:50.716517   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:50.716544   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:50.731411   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:50.731443   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:50.799252   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
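Note: the "connection refused" on localhost:8443 above means nothing is serving the apiserver port on this v1.20.0 node yet, so the "describe nodes" gathering step cannot succeed. A minimal check along the same lines (illustrative only; assumes ss is available on the node, and reuses the kubectl binary and kubeconfig paths shown in the log):

    # is anything listening on the port kubectl is being refused on?
    sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443 (apiserver not running)"
    # the same failure can be reproduced directly with the binary from the log:
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes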
	I0910 19:03:50.799275   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:50.799290   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:50.874490   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:50.874524   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
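The block above shows the per-component enumeration that produces the "No container was found matching" warnings: for each expected control-plane component, the node is asked for container IDs with `sudo crictl ps -a --quiet --name=<component>`, and an empty answer means that component has never been started. A minimal sketch of the same loop, assuming crictl is installed on the node (component list copied from the log lines above):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")   # IDs of all containers, in any state, with this name
      if [ -n "$ids" ]; then
        echo "$name: $ids"
      else
        echo "No container was found matching \"$name\""
      fi
    done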
	I0910 19:03:53.222989   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:55.225335   71627 pod_ready.go:103] pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:55.225365   71627 pod_ready.go:82] duration metric: took 4m0.007907353s for pod "metrics-server-6867b74b74-4sfwg" in "kube-system" namespace to be "Ready" ...
	E0910 19:03:55.225523   71627 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0910 19:03:55.225534   71627 pod_ready.go:39] duration metric: took 4m2.40870138s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
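The pod_ready wait that just expired polls the Ready condition of metrics-server-6867b74b74-4sfwg until the 4m0s deadline passes; minikube does this through client-go, but an equivalent check from a shell (illustrative only, pod name and namespace taken from the log) would be:

    # current Ready condition of the pod the wait gave up on:
    kubectl -n kube-system get pod metrics-server-6867b74b74-4sfwg \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
    # or block on it with the same 4-minute budget seen in the log:
    kubectl -n kube-system wait pod/metrics-server-6867b74b74-4sfwg --for=condition=Ready --timeout=4m0s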
	I0910 19:03:55.225551   71627 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:03:55.225579   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:55.225629   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:55.270742   71627 cri.go:89] found id: "1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:55.270761   71627 cri.go:89] found id: ""
	I0910 19:03:55.270768   71627 logs.go:276] 1 containers: [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a]
	I0910 19:03:55.270811   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.276233   71627 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:55.276283   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:55.316033   71627 cri.go:89] found id: "f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:55.316051   71627 cri.go:89] found id: ""
	I0910 19:03:55.316058   71627 logs.go:276] 1 containers: [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade]
	I0910 19:03:55.316103   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.320441   71627 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:55.320494   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:55.354406   71627 cri.go:89] found id: "24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:55.354428   71627 cri.go:89] found id: ""
	I0910 19:03:55.354435   71627 logs.go:276] 1 containers: [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100]
	I0910 19:03:55.354482   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.358553   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:55.358621   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:55.393871   71627 cri.go:89] found id: "1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:55.393896   71627 cri.go:89] found id: ""
	I0910 19:03:55.393904   71627 logs.go:276] 1 containers: [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68]
	I0910 19:03:55.393959   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.398102   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:55.398154   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:55.432605   71627 cri.go:89] found id: "48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:55.432625   71627 cri.go:89] found id: ""
	I0910 19:03:55.432632   71627 logs.go:276] 1 containers: [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27]
	I0910 19:03:55.432686   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.437631   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:55.437689   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:55.474250   71627 cri.go:89] found id: "55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:55.474277   71627 cri.go:89] found id: ""
	I0910 19:03:55.474287   71627 logs.go:276] 1 containers: [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20]
	I0910 19:03:55.474352   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.479177   71627 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:55.479235   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:55.514918   71627 cri.go:89] found id: ""
	I0910 19:03:55.514942   71627 logs.go:276] 0 containers: []
	W0910 19:03:55.514951   71627 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:55.514956   71627 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:03:55.515014   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:03:55.549310   71627 cri.go:89] found id: "b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:55.549330   71627 cri.go:89] found id: "173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:55.549335   71627 cri.go:89] found id: ""
	I0910 19:03:55.549347   71627 logs.go:276] 2 containers: [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9]
	I0910 19:03:55.549404   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.553420   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:55.557502   71627 logs.go:123] Gathering logs for kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] ...
	I0910 19:03:55.557531   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:55.592661   71627 logs.go:123] Gathering logs for kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] ...
	I0910 19:03:55.592685   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:55.629876   71627 logs.go:123] Gathering logs for storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] ...
	I0910 19:03:55.629908   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:55.668935   71627 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:55.668963   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:55.685881   71627 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:55.685906   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:03:55.815552   71627 logs.go:123] Gathering logs for coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] ...
	I0910 19:03:55.815578   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:55.854615   71627 logs.go:123] Gathering logs for kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] ...
	I0910 19:03:55.854640   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:55.906027   71627 logs.go:123] Gathering logs for storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] ...
	I0910 19:03:55.906069   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:55.943771   71627 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:55.943808   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:52.666368   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:55.165213   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:53.417835   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:53.430627   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:53.430694   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:53.469953   72122 cri.go:89] found id: ""
	I0910 19:03:53.469981   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.469992   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:53.469999   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:53.470060   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:53.503712   72122 cri.go:89] found id: ""
	I0910 19:03:53.503739   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.503750   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:53.503757   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:53.503814   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:53.539875   72122 cri.go:89] found id: ""
	I0910 19:03:53.539895   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.539902   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:53.539907   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:53.539952   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:53.575040   72122 cri.go:89] found id: ""
	I0910 19:03:53.575067   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.575078   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:53.575085   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:53.575159   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:53.611171   72122 cri.go:89] found id: ""
	I0910 19:03:53.611193   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.611201   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:53.611206   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:53.611253   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:53.644467   72122 cri.go:89] found id: ""
	I0910 19:03:53.644494   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.644505   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:53.644513   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:53.644575   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:53.680886   72122 cri.go:89] found id: ""
	I0910 19:03:53.680913   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.680924   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:53.680931   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:53.680989   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:53.716834   72122 cri.go:89] found id: ""
	I0910 19:03:53.716863   72122 logs.go:276] 0 containers: []
	W0910 19:03:53.716875   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:53.716885   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:53.716900   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:53.755544   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:53.755568   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:53.807382   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:53.807411   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:53.820289   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:53.820311   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:53.891500   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:53.891524   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:53.891540   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:56.472368   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:56.491939   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:56.492020   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:56.535575   72122 cri.go:89] found id: ""
	I0910 19:03:56.535603   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.535614   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:56.535620   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:56.535672   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:56.570366   72122 cri.go:89] found id: ""
	I0910 19:03:56.570390   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.570398   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:56.570403   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:56.570452   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:56.609486   72122 cri.go:89] found id: ""
	I0910 19:03:56.609524   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.609535   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:56.609542   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:56.609608   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:56.650268   72122 cri.go:89] found id: ""
	I0910 19:03:56.650295   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.650305   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:56.650312   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:56.650371   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:56.689113   72122 cri.go:89] found id: ""
	I0910 19:03:56.689139   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.689146   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:56.689154   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:56.689214   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:56.721546   72122 cri.go:89] found id: ""
	I0910 19:03:56.721568   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.721576   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:56.721582   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:56.721639   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:56.753149   72122 cri.go:89] found id: ""
	I0910 19:03:56.753171   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.753179   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:56.753185   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:56.753233   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:56.786624   72122 cri.go:89] found id: ""
	I0910 19:03:56.786648   72122 logs.go:276] 0 containers: []
	W0910 19:03:56.786658   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:56.786669   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:56.786683   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:56.840243   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:56.840276   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:56.854453   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:56.854475   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:03:56.928814   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:03:56.928835   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:56.928849   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:03:57.012360   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:03:57.012403   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:56.443638   71627 logs.go:123] Gathering logs for container status ...
	I0910 19:03:56.443684   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:03:56.498856   71627 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:56.498897   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:56.573520   71627 logs.go:123] Gathering logs for kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] ...
	I0910 19:03:56.573548   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:56.621270   71627 logs.go:123] Gathering logs for etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] ...
	I0910 19:03:56.621301   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:59.173747   71627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:59.190441   71627 api_server.go:72] duration metric: took 4m14.110101643s to wait for apiserver process to appear ...
	I0910 19:03:59.190463   71627 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:03:59.190495   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:59.190539   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:59.224716   71627 cri.go:89] found id: "1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:59.224744   71627 cri.go:89] found id: ""
	I0910 19:03:59.224753   71627 logs.go:276] 1 containers: [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a]
	I0910 19:03:59.224811   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.229345   71627 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:59.229412   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:59.263589   71627 cri.go:89] found id: "f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:59.263622   71627 cri.go:89] found id: ""
	I0910 19:03:59.263630   71627 logs.go:276] 1 containers: [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade]
	I0910 19:03:59.263686   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.269664   71627 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:59.269728   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:59.312201   71627 cri.go:89] found id: "24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:59.312224   71627 cri.go:89] found id: ""
	I0910 19:03:59.312233   71627 logs.go:276] 1 containers: [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100]
	I0910 19:03:59.312288   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.317991   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:59.318067   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:59.360625   71627 cri.go:89] found id: "1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:59.360650   71627 cri.go:89] found id: ""
	I0910 19:03:59.360657   71627 logs.go:276] 1 containers: [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68]
	I0910 19:03:59.360707   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.364948   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:59.365010   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:59.404075   71627 cri.go:89] found id: "48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:59.404096   71627 cri.go:89] found id: ""
	I0910 19:03:59.404103   71627 logs.go:276] 1 containers: [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27]
	I0910 19:03:59.404149   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.408098   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:59.408141   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:59.443767   71627 cri.go:89] found id: "55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:59.443792   71627 cri.go:89] found id: ""
	I0910 19:03:59.443802   71627 logs.go:276] 1 containers: [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20]
	I0910 19:03:59.443858   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.448348   71627 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:59.448397   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:59.485373   71627 cri.go:89] found id: ""
	I0910 19:03:59.485401   71627 logs.go:276] 0 containers: []
	W0910 19:03:59.485409   71627 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:59.485414   71627 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:03:59.485470   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:03:59.522641   71627 cri.go:89] found id: "b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:59.522660   71627 cri.go:89] found id: "173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:59.522664   71627 cri.go:89] found id: ""
	I0910 19:03:59.522671   71627 logs.go:276] 2 containers: [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9]
	I0910 19:03:59.522726   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.527283   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:03:59.531256   71627 logs.go:123] Gathering logs for etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] ...
	I0910 19:03:59.531275   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:03:59.576358   71627 logs.go:123] Gathering logs for coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] ...
	I0910 19:03:59.576382   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:03:59.625938   71627 logs.go:123] Gathering logs for kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] ...
	I0910 19:03:59.625974   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:03:59.664362   71627 logs.go:123] Gathering logs for kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] ...
	I0910 19:03:59.664386   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:03:59.718655   71627 logs.go:123] Gathering logs for storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] ...
	I0910 19:03:59.718686   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:03:59.763954   71627 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:59.763984   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:59.785217   71627 logs.go:123] Gathering logs for kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] ...
	I0910 19:03:59.785248   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:03:59.836560   71627 logs.go:123] Gathering logs for kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] ...
	I0910 19:03:59.836604   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:03:59.878973   71627 logs.go:123] Gathering logs for storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] ...
	I0910 19:03:59.879001   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:03:59.929851   71627 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:03:59.929878   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:00.400346   71627 logs.go:123] Gathering logs for container status ...
	I0910 19:04:00.400384   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:00.442281   71627 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:00.442307   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:00.510448   71627 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:00.510480   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:03:57.665980   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:59.666054   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:01.668052   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:03:59.558561   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:03:59.572993   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:03:59.573094   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:03:59.618957   72122 cri.go:89] found id: ""
	I0910 19:03:59.618988   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.618999   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:03:59.619008   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:03:59.619072   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:03:59.662544   72122 cri.go:89] found id: ""
	I0910 19:03:59.662643   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.662661   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:03:59.662673   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:03:59.662750   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:03:59.704323   72122 cri.go:89] found id: ""
	I0910 19:03:59.704349   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.704360   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:03:59.704367   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:03:59.704426   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:03:59.738275   72122 cri.go:89] found id: ""
	I0910 19:03:59.738301   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.738311   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:03:59.738317   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:03:59.738367   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:03:59.778887   72122 cri.go:89] found id: ""
	I0910 19:03:59.778922   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.778934   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:03:59.778944   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:03:59.779010   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:03:59.814953   72122 cri.go:89] found id: ""
	I0910 19:03:59.814985   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.814995   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:03:59.815003   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:03:59.815064   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:03:59.850016   72122 cri.go:89] found id: ""
	I0910 19:03:59.850048   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.850061   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:03:59.850069   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:03:59.850131   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:03:59.887546   72122 cri.go:89] found id: ""
	I0910 19:03:59.887589   72122 logs.go:276] 0 containers: []
	W0910 19:03:59.887600   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:03:59.887613   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:03:59.887632   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:03:59.938761   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:03:59.938784   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:03:59.954572   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:03:59.954603   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:04:00.029593   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:04:00.029622   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:00.029638   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:00.121427   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:04:00.121462   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:02.660924   72122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:02.674661   72122 kubeadm.go:597] duration metric: took 4m3.166175956s to restartPrimaryControlPlane
	W0910 19:04:02.674744   72122 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0910 19:04:02.674769   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:04:03.133507   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:03.150426   72122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:04:03.161678   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:04:03.173362   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:04:03.173389   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 19:04:03.173436   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:04:03.183872   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:04:03.183934   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:04:03.193891   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:04:03.203385   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:04:03.203450   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:04:03.216255   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:04:03.227938   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:04:03.228001   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:04:03.240799   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:04:03.252871   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:04:03.252922   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
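The four grep/rm pairs above are the stale-kubeconfig check before re-running kubeadm: each file under /etc/kubernetes is searched for the control-plane URL, and when the grep fails (here because none of the files exist), the file is removed so kubeadm init can rewrite it. Condensed sketch of the same check, with the paths and URL copied from the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # missing, or pointing at the wrong endpoint
      fi
    done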
	I0910 19:04:03.263682   72122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:04:03.337478   72122 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0910 19:04:03.337564   72122 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:04:03.506276   72122 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:04:03.506454   72122 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:04:03.506587   72122 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 19:04:03.697062   72122 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:04:03.698908   72122 out.go:235]   - Generating certificates and keys ...
	I0910 19:04:03.699004   72122 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:04:03.699083   72122 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:04:03.699184   72122 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:04:03.699270   72122 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:04:03.699361   72122 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:04:03.699517   72122 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:04:03.700040   72122 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:04:03.700773   72122 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:04:03.701529   72122 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:04:03.702334   72122 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:04:03.702627   72122 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:04:03.702715   72122 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:04:03.929760   72122 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:04:03.992724   72122 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:04:04.087552   72122 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:04:04.226550   72122 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:04:04.244695   72122 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:04:04.246125   72122 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:04:04.246187   72122 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:04:04.396099   72122 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:04:03.107779   71627 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I0910 19:04:03.112394   71627 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I0910 19:04:03.113347   71627 api_server.go:141] control plane version: v1.31.0
	I0910 19:04:03.113367   71627 api_server.go:131] duration metric: took 3.922898577s to wait for apiserver health ...
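The health wait above is a plain GET against /healthz on the profile's apiserver endpoint; a 200 response ends the wait. An equivalent probe from a shell, assuming the default cluster role that exposes /healthz and /version to unauthenticated clients (hence -k to skip certificate verification, which minikube's own check handles through its client config instead):

    curl -fsSk https://192.168.72.54:8444/healthz; echo    # prints "ok" when the apiserver is healthy
    curl -sk https://192.168.72.54:8444/version            # source of the "control plane version: v1.31.0" line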
	I0910 19:04:03.113375   71627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:04:03.113399   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:03.113443   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:03.153182   71627 cri.go:89] found id: "1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:04:03.153204   71627 cri.go:89] found id: ""
	I0910 19:04:03.153213   71627 logs.go:276] 1 containers: [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a]
	I0910 19:04:03.153263   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.157842   71627 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:03.157906   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:03.199572   71627 cri.go:89] found id: "f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:04:03.199594   71627 cri.go:89] found id: ""
	I0910 19:04:03.199604   71627 logs.go:276] 1 containers: [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade]
	I0910 19:04:03.199658   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.204332   71627 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:03.204409   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:03.252660   71627 cri.go:89] found id: "24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:04:03.252686   71627 cri.go:89] found id: ""
	I0910 19:04:03.252696   71627 logs.go:276] 1 containers: [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100]
	I0910 19:04:03.252751   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.257850   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:03.257915   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:03.300208   71627 cri.go:89] found id: "1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:04:03.300226   71627 cri.go:89] found id: ""
	I0910 19:04:03.300235   71627 logs.go:276] 1 containers: [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68]
	I0910 19:04:03.300294   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.304875   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:03.304953   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:03.346705   71627 cri.go:89] found id: "48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:04:03.346734   71627 cri.go:89] found id: ""
	I0910 19:04:03.346744   71627 logs.go:276] 1 containers: [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27]
	I0910 19:04:03.346807   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.351246   71627 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:03.351314   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:03.391218   71627 cri.go:89] found id: "55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:04:03.391240   71627 cri.go:89] found id: ""
	I0910 19:04:03.391247   71627 logs.go:276] 1 containers: [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20]
	I0910 19:04:03.391290   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.396156   71627 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:03.396264   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:03.437436   71627 cri.go:89] found id: ""
	I0910 19:04:03.437464   71627 logs.go:276] 0 containers: []
	W0910 19:04:03.437473   71627 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:03.437479   71627 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:03.437551   71627 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:03.476396   71627 cri.go:89] found id: "b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:04:03.476417   71627 cri.go:89] found id: "173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:04:03.476420   71627 cri.go:89] found id: ""
	I0910 19:04:03.476427   71627 logs.go:276] 2 containers: [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9]
	I0910 19:04:03.476481   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.480969   71627 ssh_runner.go:195] Run: which crictl
	I0910 19:04:03.485821   71627 logs.go:123] Gathering logs for container status ...
	I0910 19:04:03.485843   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:03.537042   71627 logs.go:123] Gathering logs for kube-apiserver [1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a] ...
	I0910 19:04:03.537079   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e3f86c05b5fff6582f84a8d6b3b28e9aeaa6e8aa93f8d313fa10fed2a039b2a"
	I0910 19:04:03.599059   71627 logs.go:123] Gathering logs for coredns [24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100] ...
	I0910 19:04:03.599102   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24f8e4dfaa105343a29354d940aeea8c3a93748a67d6d5e7ca422458e5ec2100"
	I0910 19:04:03.637541   71627 logs.go:123] Gathering logs for kube-proxy [48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27] ...
	I0910 19:04:03.637576   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48c0a781fcf3403956e2a3abf047fe793089aca9674894973a99ff96bb0dcf27"
	I0910 19:04:03.682203   71627 logs.go:123] Gathering logs for kube-controller-manager [55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20] ...
	I0910 19:04:03.682234   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55624c2cb31c227b9f56cf7d15fbf5396b471a3f11fb6340e653fe0884d7da20"
	I0910 19:04:03.734965   71627 logs.go:123] Gathering logs for storage-provisioner [173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9] ...
	I0910 19:04:03.734992   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 173c9f8505ac0b1aaed174024931b46c3a962861c1ea78bf473286f727d58ef9"
	I0910 19:04:03.769711   71627 logs.go:123] Gathering logs for storage-provisioner [b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391] ...
	I0910 19:04:03.769738   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e0e8df9acc911d26c9b2398843a5d18e71a59069bade07db4abf2f75e9c391"
	I0910 19:04:03.805970   71627 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:03.805999   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:04.165756   71627 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:04.165796   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:04.254572   71627 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:04.254609   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:04.272637   71627 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:04.272686   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:04.421716   71627 logs.go:123] Gathering logs for etcd [f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade] ...
	I0910 19:04:04.421756   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3db63297412d6cec689203a4d269fc18d98790856dea894dee74e3ceb635ade"
	I0910 19:04:04.476657   71627 logs.go:123] Gathering logs for kube-scheduler [1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68] ...
	I0910 19:04:04.476701   71627 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a520241ca1176a40bff2b6eb692a92d53027646c98f594c37eb36ec91c01e68"
	I0910 19:04:07.038592   71627 system_pods.go:59] 8 kube-system pods found
	I0910 19:04:07.038618   71627 system_pods.go:61] "coredns-6f6b679f8f-nq9fl" [87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f] Running
	I0910 19:04:07.038624   71627 system_pods.go:61] "etcd-default-k8s-diff-port-557504" [484ce7e2-e5b6-4b96-928e-c4519e072425] Running
	I0910 19:04:07.038628   71627 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-557504" [55a71e48-2cca-484c-8186-176494f8158c] Running
	I0910 19:04:07.038632   71627 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-557504" [98e7866a-c4a1-4b90-a758-506502405feb] Running
	I0910 19:04:07.038636   71627 system_pods.go:61] "kube-proxy-4t8r9" [aca739fc-0169-433b-85f1-17bf3ab538cb] Running
	I0910 19:04:07.038639   71627 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-557504" [eaa24622-2f9c-47bd-b002-173d5fb06126] Running
	I0910 19:04:07.038644   71627 system_pods.go:61] "metrics-server-6867b74b74-4sfwg" [6b5d0161-6a62-4752-b714-ada6b3772956] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:07.038651   71627 system_pods.go:61] "storage-provisioner" [7536d42b-90f4-44de-a7ba-652f8e535304] Running
	I0910 19:04:07.038658   71627 system_pods.go:74] duration metric: took 3.925277367s to wait for pod list to return data ...
	I0910 19:04:07.038667   71627 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:04:07.040831   71627 default_sa.go:45] found service account: "default"
	I0910 19:04:07.040854   71627 default_sa.go:55] duration metric: took 2.180848ms for default service account to be created ...
	I0910 19:04:07.040864   71627 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:04:07.045130   71627 system_pods.go:86] 8 kube-system pods found
	I0910 19:04:07.045151   71627 system_pods.go:89] "coredns-6f6b679f8f-nq9fl" [87dcf9d3-db33-4339-bf8e-fd16ba7b1d5f] Running
	I0910 19:04:07.045157   71627 system_pods.go:89] "etcd-default-k8s-diff-port-557504" [484ce7e2-e5b6-4b96-928e-c4519e072425] Running
	I0910 19:04:07.045162   71627 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-557504" [55a71e48-2cca-484c-8186-176494f8158c] Running
	I0910 19:04:07.045167   71627 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-557504" [98e7866a-c4a1-4b90-a758-506502405feb] Running
	I0910 19:04:07.045171   71627 system_pods.go:89] "kube-proxy-4t8r9" [aca739fc-0169-433b-85f1-17bf3ab538cb] Running
	I0910 19:04:07.045175   71627 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-557504" [eaa24622-2f9c-47bd-b002-173d5fb06126] Running
	I0910 19:04:07.045180   71627 system_pods.go:89] "metrics-server-6867b74b74-4sfwg" [6b5d0161-6a62-4752-b714-ada6b3772956] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:07.045184   71627 system_pods.go:89] "storage-provisioner" [7536d42b-90f4-44de-a7ba-652f8e535304] Running
	I0910 19:04:07.045191   71627 system_pods.go:126] duration metric: took 4.321406ms to wait for k8s-apps to be running ...
	I0910 19:04:07.045200   71627 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:04:07.045242   71627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:07.061292   71627 system_svc.go:56] duration metric: took 16.084643ms WaitForService to wait for kubelet
	I0910 19:04:07.061318   71627 kubeadm.go:582] duration metric: took 4m21.980981405s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:04:07.061342   71627 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:04:07.064260   71627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:04:07.064277   71627 node_conditions.go:123] node cpu capacity is 2
	I0910 19:04:07.064288   71627 node_conditions.go:105] duration metric: took 2.940712ms to run NodePressure ...
	I0910 19:04:07.064298   71627 start.go:241] waiting for startup goroutines ...
	I0910 19:04:07.064308   71627 start.go:246] waiting for cluster config update ...
	I0910 19:04:07.064318   71627 start.go:255] writing updated cluster config ...
	I0910 19:04:07.064627   71627 ssh_runner.go:195] Run: rm -f paused
	I0910 19:04:07.109814   71627 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:04:07.111804   71627 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-557504" cluster and "default" namespace by default
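Once the "Done!" line is printed, the profile's context is the active one in the user's kubeconfig, so plain kubectl talks to this cluster. For example (illustrative only):

    kubectl config current-context     # expected: default-k8s-diff-port-557504
    kubectl get pods -n kube-system    # served by the apiserver at 192.168.72.54:8444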
	I0910 19:04:04.165083   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:06.663618   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:04.397627   72122 out.go:235]   - Booting up control plane ...
	I0910 19:04:04.397763   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:04:04.405199   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:04:04.407281   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:04:04.408182   72122 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:04:04.411438   72122 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 19:04:08.667046   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:11.164622   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:15.461731   71529 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.433698154s)
	I0910 19:04:15.461801   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:15.483515   71529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 19:04:15.497133   71529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:04:15.513903   71529 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:04:15.513924   71529 kubeadm.go:157] found existing configuration files:
	
	I0910 19:04:15.513972   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:04:15.524468   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:04:15.524529   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:04:15.534726   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:04:15.544892   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:04:15.544944   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:04:15.554663   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:04:15.564884   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:04:15.564978   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:04:15.574280   71529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:04:15.583882   71529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:04:15.583932   71529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:04:15.593971   71529 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:04:15.639220   71529 kubeadm.go:310] W0910 19:04:15.612221    3037 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:04:15.641412   71529 kubeadm.go:310] W0910 19:04:15.614470    3037 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 19:04:15.749471   71529 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:04:13.164865   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:15.165232   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:17.664384   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:19.664943   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:22.166309   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:24.300945   71529 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 19:04:24.301016   71529 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:04:24.301143   71529 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:04:24.301274   71529 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:04:24.301408   71529 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 19:04:24.301517   71529 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:04:24.302988   71529 out.go:235]   - Generating certificates and keys ...
	I0910 19:04:24.303079   71529 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:04:24.303132   71529 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:04:24.303197   71529 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:04:24.303252   71529 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:04:24.303315   71529 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:04:24.303367   71529 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:04:24.303443   71529 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:04:24.303517   71529 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:04:24.303631   71529 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:04:24.303737   71529 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:04:24.303792   71529 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:04:24.303873   71529 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:04:24.303954   71529 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:04:24.304037   71529 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 19:04:24.304120   71529 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:04:24.304217   71529 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:04:24.304299   71529 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:04:24.304423   71529 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:04:24.304523   71529 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:04:24.305839   71529 out.go:235]   - Booting up control plane ...
	I0910 19:04:24.305946   71529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:04:24.306046   71529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:04:24.306123   71529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:04:24.306254   71529 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:04:24.306338   71529 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:04:24.306387   71529 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:04:24.306507   71529 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 19:04:24.306608   71529 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 19:04:24.306679   71529 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.526264ms
	I0910 19:04:24.306748   71529 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 19:04:24.306801   71529 kubeadm.go:310] [api-check] The API server is healthy after 5.501960865s
	I0910 19:04:24.306887   71529 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 19:04:24.306997   71529 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 19:04:24.307045   71529 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 19:04:24.307202   71529 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-347802 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 19:04:24.307250   71529 kubeadm.go:310] [bootstrap-token] Using token: 3uw8fx.h3bliquui6tuj5mh
	I0910 19:04:24.308589   71529 out.go:235]   - Configuring RBAC rules ...
	I0910 19:04:24.308728   71529 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 19:04:24.308847   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 19:04:24.309021   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 19:04:24.309197   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 19:04:24.309330   71529 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 19:04:24.309437   71529 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 19:04:24.309612   71529 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 19:04:24.309681   71529 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 19:04:24.309776   71529 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 19:04:24.309787   71529 kubeadm.go:310] 
	I0910 19:04:24.309865   71529 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 19:04:24.309874   71529 kubeadm.go:310] 
	I0910 19:04:24.309951   71529 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 19:04:24.309963   71529 kubeadm.go:310] 
	I0910 19:04:24.309984   71529 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 19:04:24.310033   71529 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 19:04:24.310085   71529 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 19:04:24.310091   71529 kubeadm.go:310] 
	I0910 19:04:24.310152   71529 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 19:04:24.310164   71529 kubeadm.go:310] 
	I0910 19:04:24.310203   71529 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 19:04:24.310214   71529 kubeadm.go:310] 
	I0910 19:04:24.310262   71529 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 19:04:24.310326   71529 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 19:04:24.310383   71529 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 19:04:24.310390   71529 kubeadm.go:310] 
	I0910 19:04:24.310457   71529 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 19:04:24.310525   71529 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 19:04:24.310531   71529 kubeadm.go:310] 
	I0910 19:04:24.310598   71529 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3uw8fx.h3bliquui6tuj5mh \
	I0910 19:04:24.310705   71529 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 \
	I0910 19:04:24.310728   71529 kubeadm.go:310] 	--control-plane 
	I0910 19:04:24.310731   71529 kubeadm.go:310] 
	I0910 19:04:24.310806   71529 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 19:04:24.310814   71529 kubeadm.go:310] 
	I0910 19:04:24.310884   71529 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3uw8fx.h3bliquui6tuj5mh \
	I0910 19:04:24.310978   71529 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fc70e53ba918af9d9fcd2ba88562ad90fa3d8760ed62f7dde0288c2408e21580 
	I0910 19:04:24.310994   71529 cni.go:84] Creating CNI manager for ""
	I0910 19:04:24.311006   71529 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 19:04:24.312411   71529 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 19:04:24.313516   71529 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 19:04:24.326066   71529 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 19:04:24.346367   71529 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 19:04:24.346446   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:24.346475   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-347802 minikube.k8s.io/updated_at=2024_09_10T19_04_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=no-preload-347802 minikube.k8s.io/primary=true
	I0910 19:04:24.374396   71529 ops.go:34] apiserver oom_adj: -16
	I0910 19:04:24.561164   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:25.061938   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:25.561435   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:26.062175   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:26.561899   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:27.061256   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:24.664345   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:26.666316   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:27.561862   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:28.061889   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:28.562200   71529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 19:04:28.732352   71529 kubeadm.go:1113] duration metric: took 4.385961888s to wait for elevateKubeSystemPrivileges
	I0910 19:04:28.732387   71529 kubeadm.go:394] duration metric: took 5m2.035769941s to StartCluster
	I0910 19:04:28.732410   71529 settings.go:142] acquiring lock: {Name:mk90a78de2ca408ec80f8706d87da32b6f2e6439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:04:28.732497   71529 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 19:04:28.735625   71529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/kubeconfig: {Name:mk27efba1d519e10070b067e86a0ee8746afd2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 19:04:28.735909   71529 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0910 19:04:28.736234   71529 config.go:182] Loaded profile config "no-preload-347802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 19:04:28.736296   71529 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0910 19:04:28.736417   71529 addons.go:69] Setting storage-provisioner=true in profile "no-preload-347802"
	I0910 19:04:28.736445   71529 addons.go:234] Setting addon storage-provisioner=true in "no-preload-347802"
	W0910 19:04:28.736453   71529 addons.go:243] addon storage-provisioner should already be in state true
	I0910 19:04:28.736480   71529 host.go:66] Checking if "no-preload-347802" exists ...
	I0910 19:04:28.736667   71529 addons.go:69] Setting default-storageclass=true in profile "no-preload-347802"
	I0910 19:04:28.736674   71529 addons.go:69] Setting metrics-server=true in profile "no-preload-347802"
	I0910 19:04:28.736703   71529 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-347802"
	I0910 19:04:28.736717   71529 addons.go:234] Setting addon metrics-server=true in "no-preload-347802"
	W0910 19:04:28.736727   71529 addons.go:243] addon metrics-server should already be in state true
	I0910 19:04:28.736758   71529 host.go:66] Checking if "no-preload-347802" exists ...
	I0910 19:04:28.737346   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.737360   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.737401   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.737709   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.737809   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.737832   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.737891   71529 out.go:177] * Verifying Kubernetes components...
	I0910 19:04:28.739122   71529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 19:04:28.755720   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41059
	I0910 19:04:28.755754   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0910 19:04:28.756110   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46397
	I0910 19:04:28.756297   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.756298   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.756688   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.756870   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.756891   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.757053   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.757092   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.757402   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.757426   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.757451   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.757637   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.757759   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.758328   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.758368   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.760809   71529 addons.go:234] Setting addon default-storageclass=true in "no-preload-347802"
	W0910 19:04:28.760825   71529 addons.go:243] addon default-storageclass should already be in state true
	I0910 19:04:28.760848   71529 host.go:66] Checking if "no-preload-347802" exists ...
	I0910 19:04:28.761254   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.761285   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.761486   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.761994   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.762024   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.775766   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41103
	I0910 19:04:28.776199   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.776801   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.776824   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.777167   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.777359   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.777651   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0910 19:04:28.778091   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.778678   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.778696   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.779019   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.779215   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.779616   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 19:04:28.780231   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0910 19:04:28.780605   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.780675   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 19:04:28.781156   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.781183   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.781330   71529 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 19:04:28.781416   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.781810   71529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 19:04:28.781841   71529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 19:04:28.782326   71529 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0910 19:04:28.782391   71529 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:04:28.782408   71529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 19:04:28.782425   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 19:04:28.783393   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 19:04:28.783413   71529 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 19:04:28.783433   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 19:04:28.785287   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.785763   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 19:04:28.785792   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.785948   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 19:04:28.786114   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 19:04:28.786250   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 19:04:28.786397   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 19:04:28.786768   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.787101   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 19:04:28.787124   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.787330   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 19:04:28.787492   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 19:04:28.787637   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 19:04:28.787747   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 19:04:28.802599   71529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0910 19:04:28.802947   71529 main.go:141] libmachine: () Calling .GetVersion
	I0910 19:04:28.803402   71529 main.go:141] libmachine: Using API Version  1
	I0910 19:04:28.803415   71529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 19:04:28.803711   71529 main.go:141] libmachine: () Calling .GetMachineName
	I0910 19:04:28.803882   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetState
	I0910 19:04:28.805296   71529 main.go:141] libmachine: (no-preload-347802) Calling .DriverName
	I0910 19:04:28.805498   71529 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 19:04:28.805510   71529 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 19:04:28.805523   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHHostname
	I0910 19:04:28.808615   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.809041   71529 main.go:141] libmachine: (no-preload-347802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:b1:44", ip: ""} in network mk-no-preload-347802: {Iface:virbr3 ExpiryTime:2024-09-10 19:59:00 +0000 UTC Type:0 Mac:52:54:00:5b:b1:44 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-347802 Clientid:01:52:54:00:5b:b1:44}
	I0910 19:04:28.809056   71529 main.go:141] libmachine: (no-preload-347802) DBG | domain no-preload-347802 has defined IP address 192.168.50.138 and MAC address 52:54:00:5b:b1:44 in network mk-no-preload-347802
	I0910 19:04:28.809333   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHPort
	I0910 19:04:28.809518   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHKeyPath
	I0910 19:04:28.809687   71529 main.go:141] libmachine: (no-preload-347802) Calling .GetSSHUsername
	I0910 19:04:28.809792   71529 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/no-preload-347802/id_rsa Username:docker}
	I0910 19:04:28.974399   71529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 19:04:29.068531   71529 node_ready.go:35] waiting up to 6m0s for node "no-preload-347802" to be "Ready" ...
	I0910 19:04:29.084281   71529 node_ready.go:49] node "no-preload-347802" has status "Ready":"True"
	I0910 19:04:29.084306   71529 node_ready.go:38] duration metric: took 15.737646ms for node "no-preload-347802" to be "Ready" ...
	I0910 19:04:29.084317   71529 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:04:29.098794   71529 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:29.122272   71529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 19:04:29.132813   71529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 19:04:29.191758   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 19:04:29.191777   71529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0910 19:04:29.224998   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 19:04:29.225019   71529 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 19:04:29.264455   71529 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:04:29.264489   71529 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 19:04:29.369504   71529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 19:04:30.199702   71529 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.066859027s)
	I0910 19:04:30.199757   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.199769   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.199850   71529 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.077541595s)
	I0910 19:04:30.199895   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.199909   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.200096   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.200135   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200147   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.200155   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.200154   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.200174   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.200187   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200201   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.200209   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.200220   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.200387   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200402   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.200617   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.200655   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.200680   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.219416   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.219437   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.219697   71529 main.go:141] libmachine: (no-preload-347802) DBG | Closing plugin on server side
	I0910 19:04:30.219705   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.219741   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.366927   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.366957   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.367264   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.367279   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.367288   71529 main.go:141] libmachine: Making call to close driver server
	I0910 19:04:30.367302   71529 main.go:141] libmachine: (no-preload-347802) Calling .Close
	I0910 19:04:30.367506   71529 main.go:141] libmachine: Successfully made call to close driver server
	I0910 19:04:30.367520   71529 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 19:04:30.367533   71529 addons.go:475] Verifying addon metrics-server=true in "no-preload-347802"
	I0910 19:04:30.369968   71529 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0910 19:04:30.371186   71529 addons.go:510] duration metric: took 1.634894777s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0910 19:04:31.104506   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:29.164993   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:31.668683   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:33.105761   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:35.606200   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:34.164783   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:36.663840   71183 pod_ready.go:103] pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:38.106188   71529 pod_ready.go:103] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"False"
	I0910 19:04:39.106175   71529 pod_ready.go:93] pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.106199   71529 pod_ready.go:82] duration metric: took 10.007378894s for pod "coredns-6f6b679f8f-bsp9f" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.106210   71529 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hlbrz" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.111333   71529 pod_ready.go:93] pod "coredns-6f6b679f8f-hlbrz" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.111352   71529 pod_ready.go:82] duration metric: took 5.13344ms for pod "coredns-6f6b679f8f-hlbrz" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.111362   71529 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.116673   71529 pod_ready.go:93] pod "etcd-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.116689   71529 pod_ready.go:82] duration metric: took 5.319986ms for pod "etcd-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.116697   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.125400   71529 pod_ready.go:93] pod "kube-apiserver-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.125422   71529 pod_ready.go:82] duration metric: took 8.717835ms for pod "kube-apiserver-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.125433   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.133790   71529 pod_ready.go:93] pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.133807   71529 pod_ready.go:82] duration metric: took 8.36626ms for pod "kube-controller-manager-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.133818   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gwzhs" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.504642   71529 pod_ready.go:93] pod "kube-proxy-gwzhs" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.504665   71529 pod_ready.go:82] duration metric: took 370.840119ms for pod "kube-proxy-gwzhs" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.504675   71529 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.903625   71529 pod_ready.go:93] pod "kube-scheduler-no-preload-347802" in "kube-system" namespace has status "Ready":"True"
	I0910 19:04:39.903646   71529 pod_ready.go:82] duration metric: took 398.964651ms for pod "kube-scheduler-no-preload-347802" in "kube-system" namespace to be "Ready" ...
	I0910 19:04:39.903653   71529 pod_ready.go:39] duration metric: took 10.819325885s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:04:39.903666   71529 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:04:39.903710   71529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:39.918479   71529 api_server.go:72] duration metric: took 11.182520681s to wait for apiserver process to appear ...
	I0910 19:04:39.918501   71529 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:04:39.918521   71529 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0910 19:04:39.922745   71529 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0910 19:04:39.923681   71529 api_server.go:141] control plane version: v1.31.0
	I0910 19:04:39.923701   71529 api_server.go:131] duration metric: took 5.193102ms to wait for apiserver health ...
	I0910 19:04:39.923710   71529 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:04:40.106587   71529 system_pods.go:59] 9 kube-system pods found
	I0910 19:04:40.106614   71529 system_pods.go:61] "coredns-6f6b679f8f-bsp9f" [53cd67b5-b542-4b40-adf9-3aba78407735] Running
	I0910 19:04:40.106619   71529 system_pods.go:61] "coredns-6f6b679f8f-hlbrz" [1e66ea46-d3ad-44e4-b9fc-c7ea5c44ac15] Running
	I0910 19:04:40.106623   71529 system_pods.go:61] "etcd-no-preload-347802" [8fcf8fce-881a-44d8-8763-2a02c848f39e] Running
	I0910 19:04:40.106626   71529 system_pods.go:61] "kube-apiserver-no-preload-347802" [372d6d5b-bf44-4edf-bd25-7b75f5966d9e] Running
	I0910 19:04:40.106630   71529 system_pods.go:61] "kube-controller-manager-no-preload-347802" [6ab95db4-18a0-48b6-ae4c-b08ccdeaec01] Running
	I0910 19:04:40.106633   71529 system_pods.go:61] "kube-proxy-gwzhs" [f03fe8e3-bee7-4805-a1e9-83494f33105c] Running
	I0910 19:04:40.106637   71529 system_pods.go:61] "kube-scheduler-no-preload-347802" [5a130f4f-7577-4e25-ac66-b9181733e667] Running
	I0910 19:04:40.106642   71529 system_pods.go:61] "metrics-server-6867b74b74-cz4tz" [22d16ca9-922b-40d8-97d1-47a44ba70aa3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:40.106646   71529 system_pods.go:61] "storage-provisioner" [cd77229d-0209-459f-ac5e-96317c425f60] Running
	I0910 19:04:40.106652   71529 system_pods.go:74] duration metric: took 182.93737ms to wait for pod list to return data ...
	I0910 19:04:40.106662   71529 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:04:40.303294   71529 default_sa.go:45] found service account: "default"
	I0910 19:04:40.303316   71529 default_sa.go:55] duration metric: took 196.649242ms for default service account to be created ...
	I0910 19:04:40.303324   71529 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:04:40.506862   71529 system_pods.go:86] 9 kube-system pods found
	I0910 19:04:40.506894   71529 system_pods.go:89] "coredns-6f6b679f8f-bsp9f" [53cd67b5-b542-4b40-adf9-3aba78407735] Running
	I0910 19:04:40.506902   71529 system_pods.go:89] "coredns-6f6b679f8f-hlbrz" [1e66ea46-d3ad-44e4-b9fc-c7ea5c44ac15] Running
	I0910 19:04:40.506908   71529 system_pods.go:89] "etcd-no-preload-347802" [8fcf8fce-881a-44d8-8763-2a02c848f39e] Running
	I0910 19:04:40.506913   71529 system_pods.go:89] "kube-apiserver-no-preload-347802" [372d6d5b-bf44-4edf-bd25-7b75f5966d9e] Running
	I0910 19:04:40.506919   71529 system_pods.go:89] "kube-controller-manager-no-preload-347802" [6ab95db4-18a0-48b6-ae4c-b08ccdeaec01] Running
	I0910 19:04:40.506925   71529 system_pods.go:89] "kube-proxy-gwzhs" [f03fe8e3-bee7-4805-a1e9-83494f33105c] Running
	I0910 19:04:40.506931   71529 system_pods.go:89] "kube-scheduler-no-preload-347802" [5a130f4f-7577-4e25-ac66-b9181733e667] Running
	I0910 19:04:40.506940   71529 system_pods.go:89] "metrics-server-6867b74b74-cz4tz" [22d16ca9-922b-40d8-97d1-47a44ba70aa3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:40.506949   71529 system_pods.go:89] "storage-provisioner" [cd77229d-0209-459f-ac5e-96317c425f60] Running
	I0910 19:04:40.506963   71529 system_pods.go:126] duration metric: took 203.633111ms to wait for k8s-apps to be running ...
	I0910 19:04:40.506974   71529 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:04:40.507032   71529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:40.522711   71529 system_svc.go:56] duration metric: took 15.728044ms WaitForService to wait for kubelet
	I0910 19:04:40.522739   71529 kubeadm.go:582] duration metric: took 11.786784927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:04:40.522761   71529 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:04:40.702993   71529 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:04:40.703011   71529 node_conditions.go:123] node cpu capacity is 2
	I0910 19:04:40.703020   71529 node_conditions.go:105] duration metric: took 180.253729ms to run NodePressure ...
	I0910 19:04:40.703031   71529 start.go:241] waiting for startup goroutines ...
	I0910 19:04:40.703037   71529 start.go:246] waiting for cluster config update ...
	I0910 19:04:40.703046   71529 start.go:255] writing updated cluster config ...
	I0910 19:04:40.703329   71529 ssh_runner.go:195] Run: rm -f paused
	I0910 19:04:40.750434   71529 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:04:40.752453   71529 out.go:177] * Done! kubectl is now configured to use "no-preload-347802" cluster and "default" namespace by default
	I0910 19:04:37.670616   71183 pod_ready.go:82] duration metric: took 4m0.012645309s for pod "metrics-server-6867b74b74-26knw" in "kube-system" namespace to be "Ready" ...
	E0910 19:04:37.670637   71183 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0910 19:04:37.670644   71183 pod_ready.go:39] duration metric: took 4m3.614436373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 19:04:37.670658   71183 api_server.go:52] waiting for apiserver process to appear ...
	I0910 19:04:37.670693   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:37.670746   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:37.721269   71183 cri.go:89] found id: "b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:37.721295   71183 cri.go:89] found id: ""
	I0910 19:04:37.721303   71183 logs.go:276] 1 containers: [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293]
	I0910 19:04:37.721361   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.725648   71183 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:37.725711   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:37.760937   71183 cri.go:89] found id: "4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:37.760967   71183 cri.go:89] found id: ""
	I0910 19:04:37.760978   71183 logs.go:276] 1 containers: [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34]
	I0910 19:04:37.761034   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.765181   71183 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:37.765243   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:37.800419   71183 cri.go:89] found id: "6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:37.800447   71183 cri.go:89] found id: ""
	I0910 19:04:37.800457   71183 logs.go:276] 1 containers: [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313]
	I0910 19:04:37.800509   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.805255   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:37.805330   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:37.849032   71183 cri.go:89] found id: "6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:37.849055   71183 cri.go:89] found id: ""
	I0910 19:04:37.849064   71183 logs.go:276] 1 containers: [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc]
	I0910 19:04:37.849136   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.853148   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:37.853224   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:37.888327   71183 cri.go:89] found id: "f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:37.888352   71183 cri.go:89] found id: ""
	I0910 19:04:37.888361   71183 logs.go:276] 1 containers: [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e]
	I0910 19:04:37.888417   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.892721   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:37.892782   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:37.928648   71183 cri.go:89] found id: "2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:37.928671   71183 cri.go:89] found id: ""
	I0910 19:04:37.928679   71183 logs.go:276] 1 containers: [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3]
	I0910 19:04:37.928731   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:37.932746   71183 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:37.932804   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:37.967343   71183 cri.go:89] found id: ""
	I0910 19:04:37.967372   71183 logs.go:276] 0 containers: []
	W0910 19:04:37.967382   71183 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:37.967387   71183 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:37.967435   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:38.004150   71183 cri.go:89] found id: "11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:38.004173   71183 cri.go:89] found id: "2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:38.004176   71183 cri.go:89] found id: ""
	I0910 19:04:38.004183   71183 logs.go:276] 2 containers: [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f]
	I0910 19:04:38.004227   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:38.008118   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:38.011779   71183 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:38.011799   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:38.026386   71183 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:38.026405   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:38.149296   71183 logs.go:123] Gathering logs for kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] ...
	I0910 19:04:38.149324   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:38.200987   71183 logs.go:123] Gathering logs for etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] ...
	I0910 19:04:38.201019   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:38.243953   71183 logs.go:123] Gathering logs for coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] ...
	I0910 19:04:38.243983   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:38.287242   71183 logs.go:123] Gathering logs for kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] ...
	I0910 19:04:38.287272   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:38.329165   71183 logs.go:123] Gathering logs for kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] ...
	I0910 19:04:38.329193   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:38.391117   71183 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:38.391144   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:38.464906   71183 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:38.464944   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:38.979681   71183 logs.go:123] Gathering logs for storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] ...
	I0910 19:04:38.979732   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:39.015604   71183 logs.go:123] Gathering logs for storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] ...
	I0910 19:04:39.015636   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:39.055715   71183 logs.go:123] Gathering logs for container status ...
	I0910 19:04:39.055748   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:39.103920   71183 logs.go:123] Gathering logs for kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] ...
	I0910 19:04:39.103952   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:41.650354   71183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 19:04:41.667568   71183 api_server.go:72] duration metric: took 4m15.330735169s to wait for apiserver process to appear ...
	I0910 19:04:41.667604   71183 api_server.go:88] waiting for apiserver healthz status ...
	I0910 19:04:41.667636   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:41.667682   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:41.707476   71183 cri.go:89] found id: "b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:41.707507   71183 cri.go:89] found id: ""
	I0910 19:04:41.707520   71183 logs.go:276] 1 containers: [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293]
	I0910 19:04:41.707590   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.711732   71183 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:41.711794   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:41.745943   71183 cri.go:89] found id: "4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:41.745963   71183 cri.go:89] found id: ""
	I0910 19:04:41.745972   71183 logs.go:276] 1 containers: [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34]
	I0910 19:04:41.746023   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.749930   71183 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:41.749978   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:41.790296   71183 cri.go:89] found id: "6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:41.790318   71183 cri.go:89] found id: ""
	I0910 19:04:41.790327   71183 logs.go:276] 1 containers: [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313]
	I0910 19:04:41.790388   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.794933   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:41.794988   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:41.840669   71183 cri.go:89] found id: "6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:41.840695   71183 cri.go:89] found id: ""
	I0910 19:04:41.840704   71183 logs.go:276] 1 containers: [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc]
	I0910 19:04:41.840762   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.845674   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:41.845729   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:41.891686   71183 cri.go:89] found id: "f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:41.891708   71183 cri.go:89] found id: ""
	I0910 19:04:41.891717   71183 logs.go:276] 1 containers: [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e]
	I0910 19:04:41.891774   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.896435   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:41.896486   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:41.935802   71183 cri.go:89] found id: "2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:41.935829   71183 cri.go:89] found id: ""
	I0910 19:04:41.935838   71183 logs.go:276] 1 containers: [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3]
	I0910 19:04:41.935882   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:41.940924   71183 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:41.940979   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:41.980326   71183 cri.go:89] found id: ""
	I0910 19:04:41.980349   71183 logs.go:276] 0 containers: []
	W0910 19:04:41.980357   71183 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:41.980362   71183 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:41.980409   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:42.021683   71183 cri.go:89] found id: "11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:42.021701   71183 cri.go:89] found id: "2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:42.021704   71183 cri.go:89] found id: ""
	I0910 19:04:42.021711   71183 logs.go:276] 2 containers: [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f]
	I0910 19:04:42.021760   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:42.025986   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:42.029896   71183 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:42.029919   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:42.101147   71183 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:42.101182   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:42.115299   71183 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:42.115324   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:42.230472   71183 logs.go:123] Gathering logs for kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] ...
	I0910 19:04:42.230503   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:42.285314   71183 logs.go:123] Gathering logs for etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] ...
	I0910 19:04:42.285341   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:42.338243   71183 logs.go:123] Gathering logs for coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] ...
	I0910 19:04:42.338283   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:42.380609   71183 logs.go:123] Gathering logs for kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] ...
	I0910 19:04:42.380636   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:42.424255   71183 logs.go:123] Gathering logs for kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] ...
	I0910 19:04:42.424290   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:42.481943   71183 logs.go:123] Gathering logs for kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] ...
	I0910 19:04:42.481972   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:42.525590   71183 logs.go:123] Gathering logs for storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] ...
	I0910 19:04:42.525613   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:42.566519   71183 logs.go:123] Gathering logs for storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] ...
	I0910 19:04:42.566546   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:42.601221   71183 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:42.601256   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:43.021780   71183 logs.go:123] Gathering logs for container status ...
	I0910 19:04:43.021816   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:45.569149   71183 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0910 19:04:45.575146   71183 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I0910 19:04:45.576058   71183 api_server.go:141] control plane version: v1.31.0
	I0910 19:04:45.576077   71183 api_server.go:131] duration metric: took 3.908465286s to wait for apiserver health ...
	I0910 19:04:45.576088   71183 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 19:04:45.576112   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:04:45.576159   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:04:45.631224   71183 cri.go:89] found id: "b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:45.631246   71183 cri.go:89] found id: ""
	I0910 19:04:45.631254   71183 logs.go:276] 1 containers: [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293]
	I0910 19:04:45.631310   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.636343   71183 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:04:45.636408   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:04:45.675538   71183 cri.go:89] found id: "4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:45.675558   71183 cri.go:89] found id: ""
	I0910 19:04:45.675565   71183 logs.go:276] 1 containers: [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34]
	I0910 19:04:45.675620   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.679865   71183 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:04:45.679921   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:04:45.724808   71183 cri.go:89] found id: "6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:45.724835   71183 cri.go:89] found id: ""
	I0910 19:04:45.724844   71183 logs.go:276] 1 containers: [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313]
	I0910 19:04:45.724898   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.729083   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:04:45.729141   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:04:45.762943   71183 cri.go:89] found id: "6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:45.762965   71183 cri.go:89] found id: ""
	I0910 19:04:45.762973   71183 logs.go:276] 1 containers: [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc]
	I0910 19:04:45.763022   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.766889   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:04:45.766935   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:04:45.802849   71183 cri.go:89] found id: "f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:45.802875   71183 cri.go:89] found id: ""
	I0910 19:04:45.802883   71183 logs.go:276] 1 containers: [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e]
	I0910 19:04:45.802924   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.806796   71183 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:04:45.806860   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:04:45.841656   71183 cri.go:89] found id: "2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:45.841675   71183 cri.go:89] found id: ""
	I0910 19:04:45.841682   71183 logs.go:276] 1 containers: [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3]
	I0910 19:04:45.841722   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.846078   71183 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:04:45.846145   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:04:45.883750   71183 cri.go:89] found id: ""
	I0910 19:04:45.883773   71183 logs.go:276] 0 containers: []
	W0910 19:04:45.883787   71183 logs.go:278] No container was found matching "kindnet"
	I0910 19:04:45.883795   71183 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0910 19:04:45.883857   71183 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0910 19:04:45.918786   71183 cri.go:89] found id: "11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:45.918815   71183 cri.go:89] found id: "2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:45.918822   71183 cri.go:89] found id: ""
	I0910 19:04:45.918829   71183 logs.go:276] 2 containers: [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f]
	I0910 19:04:45.918876   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.923329   71183 ssh_runner.go:195] Run: which crictl
	I0910 19:04:45.927395   71183 logs.go:123] Gathering logs for storage-provisioner [2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f] ...
	I0910 19:04:45.927417   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2986c781976029cf7a679f64e06cc1a05dead55a1ff492abc5154876a7c5900f"
	I0910 19:04:45.963527   71183 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:04:45.963557   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:04:46.364843   71183 logs.go:123] Gathering logs for dmesg ...
	I0910 19:04:46.364886   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0910 19:04:46.379339   71183 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:04:46.379366   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0910 19:04:46.483159   71183 logs.go:123] Gathering logs for kube-scheduler [6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc] ...
	I0910 19:04:46.483190   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a3fc78649970c9221a3daafc2fe169c0d74d03130b155e37fcb0235117ebfbc"
	I0910 19:04:46.523850   71183 logs.go:123] Gathering logs for kube-controller-manager [2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3] ...
	I0910 19:04:46.523877   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2582ec871deb8a9a78225dc75540e31d5a2baf2fb8b1c58543187100a16343e3"
	I0910 19:04:46.574864   71183 logs.go:123] Gathering logs for kube-proxy [f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e] ...
	I0910 19:04:46.574905   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f113a6d74aef2b1110e0a7e05ef7acd7a7f3efdf0e466eb27c4417a8cf11102e"
	I0910 19:04:46.613765   71183 logs.go:123] Gathering logs for storage-provisioner [11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0] ...
	I0910 19:04:46.613793   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c23ffac9396cc98a9e4cfe946063ef17d1966f37ae5029aee132d1982823c0"
	I0910 19:04:46.659791   71183 logs.go:123] Gathering logs for container status ...
	I0910 19:04:46.659819   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:04:46.722103   71183 logs.go:123] Gathering logs for kubelet ...
	I0910 19:04:46.722138   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:04:46.794098   71183 logs.go:123] Gathering logs for kube-apiserver [b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293] ...
	I0910 19:04:46.794140   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9ad0bbb3de479a955fe601296b580d0c2c860bf641f69c2932e13e803f88293"
	I0910 19:04:46.850112   71183 logs.go:123] Gathering logs for etcd [4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34] ...
	I0910 19:04:46.850148   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0241a4c8a31e05802a02ea077272608c68eba504d70f5003705bd39970be34"
	I0910 19:04:46.899733   71183 logs.go:123] Gathering logs for coredns [6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313] ...
	I0910 19:04:46.899770   71183 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ba324381f8f857b0a041b45f81c238920c734810772a2f08ced29650c770313"
	I0910 19:04:44.413134   72122 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0910 19:04:44.413215   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:44.413400   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:04:49.448164   71183 system_pods.go:59] 8 kube-system pods found
	I0910 19:04:49.448194   71183 system_pods.go:61] "coredns-6f6b679f8f-mt78p" [b4bbfe99-3c36-4095-b7e8-ee0861f9973f] Running
	I0910 19:04:49.448201   71183 system_pods.go:61] "etcd-embed-certs-836868" [0515a8db-e1b9-41d9-a69e-e49fbd6af70f] Running
	I0910 19:04:49.448207   71183 system_pods.go:61] "kube-apiserver-embed-certs-836868" [f6771940-518b-4ae6-93a6-8ffd2e08a6ff] Running
	I0910 19:04:49.448216   71183 system_pods.go:61] "kube-controller-manager-embed-certs-836868" [07f6b11e-478b-4501-b615-8906877c7cbe] Running
	I0910 19:04:49.448220   71183 system_pods.go:61] "kube-proxy-4fddv" [13f0b1df-26eb-4a6c-957d-0b7655309cb9] Running
	I0910 19:04:49.448225   71183 system_pods.go:61] "kube-scheduler-embed-certs-836868" [a16bf2b1-3fe3-487b-92f7-f71f693b83dd] Running
	I0910 19:04:49.448232   71183 system_pods.go:61] "metrics-server-6867b74b74-26knw" [fdf89bfa-f2b6-4dc4-9279-ed75c1256494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:49.448239   71183 system_pods.go:61] "storage-provisioner" [47ed78c5-1cce-4d50-a023-5c356f331035] Running
	I0910 19:04:49.448248   71183 system_pods.go:74] duration metric: took 3.872154051s to wait for pod list to return data ...
	I0910 19:04:49.448255   71183 default_sa.go:34] waiting for default service account to be created ...
	I0910 19:04:49.450795   71183 default_sa.go:45] found service account: "default"
	I0910 19:04:49.450816   71183 default_sa.go:55] duration metric: took 2.553358ms for default service account to be created ...
	I0910 19:04:49.450826   71183 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 19:04:49.454993   71183 system_pods.go:86] 8 kube-system pods found
	I0910 19:04:49.455015   71183 system_pods.go:89] "coredns-6f6b679f8f-mt78p" [b4bbfe99-3c36-4095-b7e8-ee0861f9973f] Running
	I0910 19:04:49.455020   71183 system_pods.go:89] "etcd-embed-certs-836868" [0515a8db-e1b9-41d9-a69e-e49fbd6af70f] Running
	I0910 19:04:49.455024   71183 system_pods.go:89] "kube-apiserver-embed-certs-836868" [f6771940-518b-4ae6-93a6-8ffd2e08a6ff] Running
	I0910 19:04:49.455030   71183 system_pods.go:89] "kube-controller-manager-embed-certs-836868" [07f6b11e-478b-4501-b615-8906877c7cbe] Running
	I0910 19:04:49.455033   71183 system_pods.go:89] "kube-proxy-4fddv" [13f0b1df-26eb-4a6c-957d-0b7655309cb9] Running
	I0910 19:04:49.455038   71183 system_pods.go:89] "kube-scheduler-embed-certs-836868" [a16bf2b1-3fe3-487b-92f7-f71f693b83dd] Running
	I0910 19:04:49.455047   71183 system_pods.go:89] "metrics-server-6867b74b74-26knw" [fdf89bfa-f2b6-4dc4-9279-ed75c1256494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 19:04:49.455053   71183 system_pods.go:89] "storage-provisioner" [47ed78c5-1cce-4d50-a023-5c356f331035] Running
	I0910 19:04:49.455062   71183 system_pods.go:126] duration metric: took 4.230457ms to wait for k8s-apps to be running ...
	I0910 19:04:49.455073   71183 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 19:04:49.455130   71183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:04:49.471265   71183 system_svc.go:56] duration metric: took 16.184718ms WaitForService to wait for kubelet
	I0910 19:04:49.471293   71183 kubeadm.go:582] duration metric: took 4m23.134472506s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 19:04:49.471320   71183 node_conditions.go:102] verifying NodePressure condition ...
	I0910 19:04:49.475529   71183 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 19:04:49.475548   71183 node_conditions.go:123] node cpu capacity is 2
	I0910 19:04:49.475558   71183 node_conditions.go:105] duration metric: took 4.228611ms to run NodePressure ...
	I0910 19:04:49.475567   71183 start.go:241] waiting for startup goroutines ...
	I0910 19:04:49.475577   71183 start.go:246] waiting for cluster config update ...
	I0910 19:04:49.475589   71183 start.go:255] writing updated cluster config ...
	I0910 19:04:49.475827   71183 ssh_runner.go:195] Run: rm -f paused
	I0910 19:04:49.522354   71183 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 19:04:49.524738   71183 out.go:177] * Done! kubectl is now configured to use "embed-certs-836868" cluster and "default" namespace by default
	I0910 19:04:49.413796   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:49.413967   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:04:59.414341   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:04:59.414514   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:19.415680   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:05:19.415950   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:59.417770   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:05:59.418015   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:05:59.418035   72122 kubeadm.go:310] 
	I0910 19:05:59.418101   72122 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0910 19:05:59.418137   72122 kubeadm.go:310] 		timed out waiting for the condition
	I0910 19:05:59.418143   72122 kubeadm.go:310] 
	I0910 19:05:59.418178   72122 kubeadm.go:310] 	This error is likely caused by:
	I0910 19:05:59.418207   72122 kubeadm.go:310] 		- The kubelet is not running
	I0910 19:05:59.418313   72122 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0910 19:05:59.418321   72122 kubeadm.go:310] 
	I0910 19:05:59.418443   72122 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0910 19:05:59.418477   72122 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0910 19:05:59.418519   72122 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0910 19:05:59.418527   72122 kubeadm.go:310] 
	I0910 19:05:59.418625   72122 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0910 19:05:59.418723   72122 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0910 19:05:59.418731   72122 kubeadm.go:310] 
	I0910 19:05:59.418869   72122 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0910 19:05:59.418976   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0910 19:05:59.419045   72122 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0910 19:05:59.419141   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0910 19:05:59.419152   72122 kubeadm.go:310] 
	I0910 19:05:59.420015   72122 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:05:59.420093   72122 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0910 19:05:59.420165   72122 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0910 19:05:59.420289   72122 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0910 19:05:59.420339   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0910 19:06:04.848652   72122 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.428289133s)
	I0910 19:06:04.848719   72122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 19:06:04.862914   72122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 19:06:04.872903   72122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 19:06:04.872920   72122 kubeadm.go:157] found existing configuration files:
	
	I0910 19:06:04.872960   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 19:06:04.882109   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 19:06:04.882168   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 19:06:04.890962   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 19:06:04.899925   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 19:06:04.899985   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 19:06:04.908796   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 19:06:04.917123   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 19:06:04.917173   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 19:06:04.925821   72122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 19:06:04.937885   72122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 19:06:04.937963   72122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 19:06:04.948108   72122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 19:06:05.019246   72122 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0910 19:06:05.019321   72122 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 19:06:05.162639   72122 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 19:06:05.162770   72122 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 19:06:05.162918   72122 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 19:06:05.343270   72122 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 19:06:05.345092   72122 out.go:235]   - Generating certificates and keys ...
	I0910 19:06:05.345189   72122 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 19:06:05.345299   72122 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 19:06:05.345417   72122 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0910 19:06:05.345497   72122 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0910 19:06:05.345606   72122 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0910 19:06:05.345718   72122 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0910 19:06:05.345981   72122 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0910 19:06:05.346367   72122 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0910 19:06:05.346822   72122 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0910 19:06:05.347133   72122 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0910 19:06:05.347246   72122 kubeadm.go:310] [certs] Using the existing "sa" key
	I0910 19:06:05.347346   72122 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 19:06:05.536681   72122 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 19:06:05.773929   72122 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 19:06:05.994857   72122 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 19:06:06.139145   72122 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 19:06:06.154510   72122 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 19:06:06.155479   72122 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 19:06:06.155548   72122 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 19:06:06.311520   72122 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 19:06:06.314167   72122 out.go:235]   - Booting up control plane ...
	I0910 19:06:06.314311   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 19:06:06.320856   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 19:06:06.321801   72122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 19:06:06.322508   72122 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 19:06:06.324744   72122 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 19:06:46.327168   72122 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0910 19:06:46.327286   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:06:46.327534   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:06:51.328423   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:06:51.328643   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:07:01.329028   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:07:01.329315   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:07:21.329371   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:07:21.329627   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:08:01.328238   72122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0910 19:08:01.328535   72122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0910 19:08:01.328566   72122 kubeadm.go:310] 
	I0910 19:08:01.328625   72122 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0910 19:08:01.328688   72122 kubeadm.go:310] 		timed out waiting for the condition
	I0910 19:08:01.328701   72122 kubeadm.go:310] 
	I0910 19:08:01.328749   72122 kubeadm.go:310] 	This error is likely caused by:
	I0910 19:08:01.328797   72122 kubeadm.go:310] 		- The kubelet is not running
	I0910 19:08:01.328941   72122 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0910 19:08:01.328953   72122 kubeadm.go:310] 
	I0910 19:08:01.329068   72122 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0910 19:08:01.329136   72122 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0910 19:08:01.329177   72122 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0910 19:08:01.329191   72122 kubeadm.go:310] 
	I0910 19:08:01.329310   72122 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0910 19:08:01.329377   72122 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0910 19:08:01.329383   72122 kubeadm.go:310] 
	I0910 19:08:01.329468   72122 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0910 19:08:01.329539   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0910 19:08:01.329607   72122 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0910 19:08:01.329667   72122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0910 19:08:01.329674   72122 kubeadm.go:310] 
	I0910 19:08:01.330783   72122 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 19:08:01.330892   72122 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0910 19:08:01.330963   72122 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0910 19:08:01.331020   72122 kubeadm.go:394] duration metric: took 8m1.874926868s to StartCluster
	I0910 19:08:01.331061   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0910 19:08:01.331117   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0910 19:08:01.385468   72122 cri.go:89] found id: ""
	I0910 19:08:01.385492   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.385499   72122 logs.go:278] No container was found matching "kube-apiserver"
	I0910 19:08:01.385505   72122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0910 19:08:01.385571   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0910 19:08:01.424028   72122 cri.go:89] found id: ""
	I0910 19:08:01.424051   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.424060   72122 logs.go:278] No container was found matching "etcd"
	I0910 19:08:01.424064   72122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0910 19:08:01.424121   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0910 19:08:01.462946   72122 cri.go:89] found id: ""
	I0910 19:08:01.462973   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.462983   72122 logs.go:278] No container was found matching "coredns"
	I0910 19:08:01.462991   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0910 19:08:01.463045   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0910 19:08:01.498242   72122 cri.go:89] found id: ""
	I0910 19:08:01.498269   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.498278   72122 logs.go:278] No container was found matching "kube-scheduler"
	I0910 19:08:01.498283   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0910 19:08:01.498329   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0910 19:08:01.532917   72122 cri.go:89] found id: ""
	I0910 19:08:01.532946   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.532953   72122 logs.go:278] No container was found matching "kube-proxy"
	I0910 19:08:01.532959   72122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0910 19:08:01.533011   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0910 19:08:01.567935   72122 cri.go:89] found id: ""
	I0910 19:08:01.567959   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.567967   72122 logs.go:278] No container was found matching "kube-controller-manager"
	I0910 19:08:01.567973   72122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0910 19:08:01.568027   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0910 19:08:01.601393   72122 cri.go:89] found id: ""
	I0910 19:08:01.601418   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.601426   72122 logs.go:278] No container was found matching "kindnet"
	I0910 19:08:01.601432   72122 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0910 19:08:01.601489   72122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0910 19:08:01.639307   72122 cri.go:89] found id: ""
	I0910 19:08:01.639335   72122 logs.go:276] 0 containers: []
	W0910 19:08:01.639345   72122 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0910 19:08:01.639358   72122 logs.go:123] Gathering logs for describe nodes ...
	I0910 19:08:01.639373   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0910 19:08:01.726566   72122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0910 19:08:01.726591   72122 logs.go:123] Gathering logs for CRI-O ...
	I0910 19:08:01.726614   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0910 19:08:01.839965   72122 logs.go:123] Gathering logs for container status ...
	I0910 19:08:01.840004   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0910 19:08:01.879658   72122 logs.go:123] Gathering logs for kubelet ...
	I0910 19:08:01.879687   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0910 19:08:01.939066   72122 logs.go:123] Gathering logs for dmesg ...
	I0910 19:08:01.939102   72122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0910 19:08:01.955390   72122 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0910 19:08:01.955436   72122 out.go:270] * 
	W0910 19:08:01.955500   72122 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0910 19:08:01.955524   72122 out.go:270] * 
	W0910 19:08:01.956343   72122 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 19:08:01.959608   72122 out.go:201] 
	W0910 19:08:01.960877   72122 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0910 19:08:01.960929   72122 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0910 19:08:01.960957   72122 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0910 19:08:01.962345   72122 out.go:201] 
	
	
	==> CRI-O <==
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.229843087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995954229814601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=093d4267-c45b-4131-964e-915025154a84 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.230334906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c5fdf99-7d81-4a14-bfa0-c7f52c12103f name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.230386509Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c5fdf99-7d81-4a14-bfa0-c7f52c12103f name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.230420210Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0c5fdf99-7d81-4a14-bfa0-c7f52c12103f name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.260626513Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d2c6be65-51d9-4ba0-919d-bf8990cb2a1a name=/runtime.v1.RuntimeService/Version
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.260692604Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d2c6be65-51d9-4ba0-919d-bf8990cb2a1a name=/runtime.v1.RuntimeService/Version
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.261634266Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f397f06a-a775-4239-8f16-a6d29ae5f44b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.262017397Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995954261993921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f397f06a-a775-4239-8f16-a6d29ae5f44b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.262470973Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33bf7040-5c18-4507-84b4-70ca85a1770e name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.262517242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33bf7040-5c18-4507-84b4-70ca85a1770e name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.262558898Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=33bf7040-5c18-4507-84b4-70ca85a1770e name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.296895607Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6489eac7-e54f-4788-a84a-c577e5f9d9d8 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.296971852Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6489eac7-e54f-4788-a84a-c577e5f9d9d8 name=/runtime.v1.RuntimeService/Version
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.298191079Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be78c694-842e-4867-88d4-f83aecc24908 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.298589466Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995954298569689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be78c694-842e-4867-88d4-f83aecc24908 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.299109280Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a1a83c0-afd0-4d82-a1d9-e26a3d438b48 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.299242194Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a1a83c0-afd0-4d82-a1d9-e26a3d438b48 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.299303878Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5a1a83c0-afd0-4d82-a1d9-e26a3d438b48 name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.333432892Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd62e891-3c1f-46ae-a3d6-4674cf1a83db name=/runtime.v1.RuntimeService/Version
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.333522819Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd62e891-3c1f-46ae-a3d6-4674cf1a83db name=/runtime.v1.RuntimeService/Version
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.334445662Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51fcfb3f-5d4b-46fa-bd0b-144527187d17 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.334851242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725995954334830603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51fcfb3f-5d4b-46fa-bd0b-144527187d17 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.335436811Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5590881a-40aa-4012-9a0d-74f9af7efa1a name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.335510030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5590881a-40aa-4012-9a0d-74f9af7efa1a name=/runtime.v1.RuntimeService/ListContainers
	Sep 10 19:19:14 old-k8s-version-432422 crio[642]: time="2024-09-10 19:19:14.335544500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5590881a-40aa-4012-9a0d-74f9af7efa1a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep10 18:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.058119] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044186] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.255058] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.413650] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.618121] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.079518] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.057884] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065532] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.191553] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.154429] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.265022] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +6.430445] systemd-fstab-generator[893]: Ignoring "noauto" option for root device
	[  +0.070012] kauditd_printk_skb: 130 callbacks suppressed
	[Sep10 19:00] systemd-fstab-generator[1016]: Ignoring "noauto" option for root device
	[ +11.862536] kauditd_printk_skb: 46 callbacks suppressed
	[Sep10 19:04] systemd-fstab-generator[5076]: Ignoring "noauto" option for root device
	[Sep10 19:06] systemd-fstab-generator[5359]: Ignoring "noauto" option for root device
	[  +0.066075] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:19:14 up 19 min,  0 users,  load average: 0.08, 0.03, 0.04
	Linux old-k8s-version-432422 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]:         /usr/local/go/src/errors/wrap.go:95 +0x253
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/api/errors.ReasonForError(0x4f04d00, 0xc000a34180, 0x0, 0x0)
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/api/errors/errors.go:650 +0x85
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/api/errors.IsResourceExpired(...)
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/api/errors/errors.go:512
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.isExpiredError(0x4f04d00, 0xc000a34180, 0x0)
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:584 +0x39
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.DefaultWatchErrorHandler(0xc000265180, 0x4f04d00, 0xc000a34180)
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:128 +0x4d
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000c4a6f0)
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d87ef0, 0x4f0ac20, 0xc000c5c500, 0x1, 0xc00009e0c0)
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000265180, 0xc00009e0c0)
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c1a9b0, 0xc000c07160)
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 10 19:19:14 old-k8s-version-432422 kubelet[6830]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 10 19:19:14 old-k8s-version-432422 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 10 19:19:14 old-k8s-version-432422 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-432422 -n old-k8s-version-432422
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-432422 -n old-k8s-version-432422: exit status 2 (239.457711ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-432422" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (126.97s)
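
	A hand-run follow-up along the lines of the suggestion printed in the log above (check 'journalctl -xeu kubelet', then retry with the kubelet cgroup-driver override) might look like this. This is an illustrative sketch only: the profile name, driver, runtime, and Kubernetes version are taken from this report, and the step is not executed by the test suite.
	
		minikube ssh -p old-k8s-version-432422 "sudo journalctl -xeu kubelet | tail -n 50"
		minikube start -p old-k8s-version-432422 --driver=kvm2 --container-runtime=crio \
		  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd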

                                                
                                    

Test pass (245/312)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 15.87
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 5.09
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.12
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.6
22 TestOffline 107.98
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.04
27 TestAddons/Setup 132.19
31 TestAddons/serial/GCPAuth/Namespaces 0.15
35 TestAddons/parallel/InspektorGadget 12.13
37 TestAddons/parallel/HelmTiller 9.11
39 TestAddons/parallel/CSI 57.73
40 TestAddons/parallel/Headlamp 17.89
41 TestAddons/parallel/CloudSpanner 5.84
42 TestAddons/parallel/LocalPath 53.28
43 TestAddons/parallel/NvidiaDevicePlugin 6.85
44 TestAddons/parallel/Yakd 11.68
45 TestAddons/StoppedEnableDisable 7.54
46 TestCertOptions 114.15
47 TestCertExpiration 307.1
49 TestForceSystemdFlag 46.35
50 TestForceSystemdEnv 77.04
52 TestKVMDriverInstallOrUpdate 1.45
56 TestErrorSpam/setup 45.49
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.71
59 TestErrorSpam/pause 1.53
60 TestErrorSpam/unpause 1.77
61 TestErrorSpam/stop 6.34
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 86.15
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 40.28
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.4
73 TestFunctional/serial/CacheCmd/cache/add_local 1.04
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 31.53
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.39
84 TestFunctional/serial/LogsFileCmd 1.41
85 TestFunctional/serial/InvalidService 4.36
87 TestFunctional/parallel/ConfigCmd 0.3
88 TestFunctional/parallel/DashboardCmd 14.85
89 TestFunctional/parallel/DryRun 0.27
90 TestFunctional/parallel/InternationalLanguage 0.13
91 TestFunctional/parallel/StatusCmd 0.78
95 TestFunctional/parallel/ServiceCmdConnect 15.53
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 34.78
99 TestFunctional/parallel/SSHCmd 0.41
100 TestFunctional/parallel/CpCmd 1.38
101 TestFunctional/parallel/MySQL 21.89
102 TestFunctional/parallel/FileSync 0.21
103 TestFunctional/parallel/CertSync 1.3
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
111 TestFunctional/parallel/License 0.16
112 TestFunctional/parallel/Version/short 0.05
113 TestFunctional/parallel/Version/components 0.71
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.55
121 TestFunctional/parallel/ImageCommands/Setup 0.41
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.61
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.2
134 TestFunctional/parallel/MountCmd/any-port 19.91
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.42
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.84
137 TestFunctional/parallel/ImageCommands/ImageRemove 2.98
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 5.32
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
140 TestFunctional/parallel/MountCmd/specific-port 1.92
141 TestFunctional/parallel/ServiceCmd/DeployApp 9.17
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.26
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.27
144 TestFunctional/parallel/ProfileCmd/profile_list 0.25
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.25
146 TestFunctional/parallel/ServiceCmd/List 1.36
147 TestFunctional/parallel/ServiceCmd/JSONOutput 1.24
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
149 TestFunctional/parallel/ServiceCmd/Format 0.59
150 TestFunctional/parallel/ServiceCmd/URL 0.37
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 188.35
158 TestMultiControlPlane/serial/DeployApp 5.56
159 TestMultiControlPlane/serial/PingHostFromPods 1.15
160 TestMultiControlPlane/serial/AddWorkerNode 52.6
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
163 TestMultiControlPlane/serial/CopyFile 12.55
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.46
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.38
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.77
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.35
172 TestMultiControlPlane/serial/RestartCluster 314.33
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
174 TestMultiControlPlane/serial/AddSecondaryNode 73.57
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
179 TestJSONOutput/start/Command 84.48
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.72
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.66
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.38
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.19
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 87.25
211 TestMountStart/serial/StartWithMountFirst 26.66
212 TestMountStart/serial/VerifyMountFirst 0.36
213 TestMountStart/serial/StartWithMountSecond 27.63
214 TestMountStart/serial/VerifyMountSecond 0.37
215 TestMountStart/serial/DeleteFirst 0.67
216 TestMountStart/serial/VerifyMountPostDelete 0.36
217 TestMountStart/serial/Stop 1.3
218 TestMountStart/serial/RestartStopped 22.21
219 TestMountStart/serial/VerifyMountPostStop 0.35
222 TestMultiNode/serial/FreshStart2Nodes 113.38
223 TestMultiNode/serial/DeployApp2Nodes 4.68
224 TestMultiNode/serial/PingHostFrom2Pods 0.75
225 TestMultiNode/serial/AddNode 47.18
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.2
228 TestMultiNode/serial/CopyFile 6.89
229 TestMultiNode/serial/StopNode 2.28
230 TestMultiNode/serial/StartAfterStop 37.61
232 TestMultiNode/serial/DeleteNode 2.33
234 TestMultiNode/serial/RestartMultiNode 192.5
235 TestMultiNode/serial/ValidateNameConflict 40.4
242 TestScheduledStopUnix 113.88
246 TestRunningBinaryUpgrade 243.69
252 TestStoppedBinaryUpgrade/Setup 0.58
254 TestStoppedBinaryUpgrade/Upgrade 148.45
259 TestNetworkPlugins/group/false 3.21
271 TestPause/serial/Start 98.59
272 TestStoppedBinaryUpgrade/MinikubeLogs 1.02
274 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
275 TestNoKubernetes/serial/StartWithK8s 72.33
277 TestNoKubernetes/serial/StartWithStopK8s 4.88
278 TestNoKubernetes/serial/Start 25.97
279 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
280 TestNoKubernetes/serial/ProfileList 1.06
281 TestNoKubernetes/serial/Stop 1.3
282 TestNoKubernetes/serial/StartNoArgs 42.77
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
284 TestNetworkPlugins/group/auto/Start 134.33
285 TestNetworkPlugins/group/kindnet/Start 74.4
286 TestNetworkPlugins/group/calico/Start 75.18
287 TestNetworkPlugins/group/auto/KubeletFlags 0.2
288 TestNetworkPlugins/group/auto/NetCatPod 11.21
289 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
290 TestNetworkPlugins/group/auto/DNS 0.16
291 TestNetworkPlugins/group/auto/Localhost 0.12
292 TestNetworkPlugins/group/auto/HairPin 0.14
293 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
294 TestNetworkPlugins/group/kindnet/NetCatPod 11.27
295 TestNetworkPlugins/group/custom-flannel/Start 70.57
296 TestNetworkPlugins/group/kindnet/DNS 0.15
297 TestNetworkPlugins/group/kindnet/Localhost 0.16
298 TestNetworkPlugins/group/kindnet/HairPin 0.14
299 TestNetworkPlugins/group/enable-default-cni/Start 93.9
300 TestNetworkPlugins/group/calico/ControllerPod 6.01
301 TestNetworkPlugins/group/calico/KubeletFlags 0.23
302 TestNetworkPlugins/group/calico/NetCatPod 14.26
303 TestNetworkPlugins/group/calico/DNS 0.19
304 TestNetworkPlugins/group/calico/Localhost 0.15
305 TestNetworkPlugins/group/calico/HairPin 0.19
306 TestNetworkPlugins/group/flannel/Start 65.05
307 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
308 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.32
309 TestNetworkPlugins/group/custom-flannel/DNS 0.18
310 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
311 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
312 TestNetworkPlugins/group/bridge/Start 88.82
315 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
316 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.24
317 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
318 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
319 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
320 TestNetworkPlugins/group/flannel/ControllerPod 6.01
321 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
322 TestNetworkPlugins/group/flannel/NetCatPod 13.27
324 TestStartStop/group/no-preload/serial/FirstStart 119.18
325 TestNetworkPlugins/group/flannel/DNS 0.16
326 TestNetworkPlugins/group/flannel/Localhost 0.13
327 TestNetworkPlugins/group/flannel/HairPin 0.12
329 TestStartStop/group/embed-certs/serial/FirstStart 72.93
330 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
331 TestNetworkPlugins/group/bridge/NetCatPod 10.45
332 TestNetworkPlugins/group/bridge/DNS 0.16
333 TestNetworkPlugins/group/bridge/Localhost 0.14
334 TestNetworkPlugins/group/bridge/HairPin 0.12
336 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.09
337 TestStartStop/group/embed-certs/serial/DeployApp 10.27
338 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.4
340 TestStartStop/group/no-preload/serial/DeployApp 9.27
341 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.99
342 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.31
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
347 TestStartStop/group/embed-certs/serial/SecondStart 637.61
352 TestStartStop/group/no-preload/serial/SecondStart 608.88
353 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 566.03
354 TestStartStop/group/old-k8s-version/serial/Stop 2.28
355 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
366 TestStartStop/group/newest-cni/serial/FirstStart 47.28
367 TestStartStop/group/newest-cni/serial/DeployApp 0
368 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.02
369 TestStartStop/group/newest-cni/serial/Stop 10.39
370 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
371 TestStartStop/group/newest-cni/serial/SecondStart 36.88
372 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
374 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
375 TestStartStop/group/newest-cni/serial/Pause 2.37
x
+
TestDownloadOnly/v1.20.0/json-events (15.87s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-545922 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-545922 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (15.869148254s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (15.87s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-545922
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-545922: exit status 85 (54.75941ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-545922 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |          |
	|         | -p download-only-545922        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:29:00
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:29:00.246414   13133 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:29:00.246659   13133 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:00.246669   13133 out.go:358] Setting ErrFile to fd 2...
	I0910 17:29:00.246675   13133 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:00.246839   13133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	W0910 17:29:00.246952   13133 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19598-5973/.minikube/config/config.json: open /home/jenkins/minikube-integration/19598-5973/.minikube/config/config.json: no such file or directory
	I0910 17:29:00.247502   13133 out.go:352] Setting JSON to true
	I0910 17:29:00.248350   13133 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":692,"bootTime":1725988648,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:29:00.248404   13133 start.go:139] virtualization: kvm guest
	I0910 17:29:00.250739   13133 out.go:97] [download-only-545922] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0910 17:29:00.250823   13133 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball: no such file or directory
	I0910 17:29:00.250855   13133 notify.go:220] Checking for updates...
	I0910 17:29:00.252269   13133 out.go:169] MINIKUBE_LOCATION=19598
	I0910 17:29:00.253679   13133 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:29:00.254837   13133 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:29:00.256009   13133 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:29:00.257301   13133 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0910 17:29:00.259496   13133 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0910 17:29:00.259719   13133 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:29:00.355796   13133 out.go:97] Using the kvm2 driver based on user configuration
	I0910 17:29:00.355818   13133 start.go:297] selected driver: kvm2
	I0910 17:29:00.355823   13133 start.go:901] validating driver "kvm2" against <nil>
	I0910 17:29:00.356142   13133 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:29:00.356252   13133 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 17:29:00.370619   13133 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 17:29:00.370663   13133 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 17:29:00.371126   13133 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0910 17:29:00.371262   13133 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 17:29:00.371323   13133 cni.go:84] Creating CNI manager for ""
	I0910 17:29:00.371336   13133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 17:29:00.371346   13133 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 17:29:00.371392   13133 start.go:340] cluster config:
	{Name:download-only-545922 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-545922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:29:00.371548   13133 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:29:00.373176   13133 out.go:97] Downloading VM boot image ...
	I0910 17:29:00.373209   13133 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19598-5973/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 17:29:09.377252   13133 out.go:97] Starting "download-only-545922" primary control-plane node in "download-only-545922" cluster
	I0910 17:29:09.377280   13133 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 17:29:09.402281   13133 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0910 17:29:09.402323   13133 cache.go:56] Caching tarball of preloaded images
	I0910 17:29:09.402467   13133 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0910 17:29:09.404165   13133 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0910 17:29:09.404177   13133 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0910 17:29:09.437864   13133 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0910 17:29:14.694874   13133 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0910 17:29:14.694973   13133 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-545922 host does not exist
	  To start a cluster, run: "minikube start -p download-only-545922"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
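
	The "Last Start" log above shows the preload cache being populated: the tarball URL carries an md5 checksum parameter, and minikube verifies that checksum after the download. Done by hand, the equivalent check is roughly the following sketch (URL and checksum copied from the log above; this is not a step the test itself performs):
	
		curl -fLo preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
		  https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
		echo "f93b07cde9c3289306cbaeb7a1803c19  preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -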

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-545922
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (5.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-355146 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-355146 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.09391964s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (5.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-355146
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-355146: exit status 85 (55.772551ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-545922 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | -p download-only-545922        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p download-only-545922        | download-only-545922 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| start   | -o=json --download-only        | download-only-355146 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | -p download-only-355146        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:29:16
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:29:16.412042   13371 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:29:16.412149   13371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:16.412155   13371 out.go:358] Setting ErrFile to fd 2...
	I0910 17:29:16.412161   13371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:16.412345   13371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:29:16.412886   13371 out.go:352] Setting JSON to true
	I0910 17:29:16.413776   13371 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":708,"bootTime":1725988648,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:29:16.413830   13371 start.go:139] virtualization: kvm guest
	I0910 17:29:16.415825   13371 out.go:97] [download-only-355146] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 17:29:16.415951   13371 notify.go:220] Checking for updates...
	I0910 17:29:16.417258   13371 out.go:169] MINIKUBE_LOCATION=19598
	I0910 17:29:16.418461   13371 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:29:16.419745   13371 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:29:16.420847   13371 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:29:16.422046   13371 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0910 17:29:16.424229   13371 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0910 17:29:16.424414   13371 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:29:16.455694   13371 out.go:97] Using the kvm2 driver based on user configuration
	I0910 17:29:16.455720   13371 start.go:297] selected driver: kvm2
	I0910 17:29:16.455728   13371 start.go:901] validating driver "kvm2" against <nil>
	I0910 17:29:16.456033   13371 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:29:16.456118   13371 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 17:29:16.471199   13371 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 17:29:16.471263   13371 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 17:29:16.471929   13371 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0910 17:29:16.472113   13371 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 17:29:16.472142   13371 cni.go:84] Creating CNI manager for ""
	I0910 17:29:16.472150   13371 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0910 17:29:16.472160   13371 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 17:29:16.472225   13371 start.go:340] cluster config:
	{Name:download-only-355146 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-355146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:29:16.472327   13371 iso.go:125] acquiring lock: {Name:mk310938d9fbdb51215ef930a7daf1e95b891702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:29:16.473995   13371 out.go:97] Starting "download-only-355146" primary control-plane node in "download-only-355146" cluster
	I0910 17:29:16.474020   13371 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:29:16.503483   13371 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0910 17:29:16.503504   13371 cache.go:56] Caching tarball of preloaded images
	I0910 17:29:16.503634   13371 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:29:16.505406   13371 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0910 17:29:16.505430   13371 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0910 17:29:16.530805   13371 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0910 17:29:20.300924   13371 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0910 17:29:20.301014   13371 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19598-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0910 17:29:21.033353   13371 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0910 17:29:21.033692   13371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/download-only-355146/config.json ...
	I0910 17:29:21.033722   13371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/download-only-355146/config.json: {Name:mk54be998a505ce15c0551661479431d60c310e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:21.033894   13371 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0910 17:29:21.034049   13371 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19598-5973/.minikube/cache/linux/amd64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-355146 host does not exist
	  To start a cluster, run: "minikube start -p download-only-355146"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)
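
Note: the preload steps above fetch the tarball with a ?checksum=md5:... query and then verify it on disk ("getting checksum" / "verifying checksum"). Below is a minimal Go sketch of that general hash-while-downloading pattern; the URL, destination path, and digest are placeholders, and this is not minikube's actual download code.

	// md5fetch.go: hash the response body while writing it to disk, then
	// compare the hex digest against an expected value (placeholder values).
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func fetchWithMD5(url, dest, wantHex string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		// Hash the bytes as they are written to disk.
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		// Placeholder URL, path, and digest for illustration only.
		err := fetchWithMD5(
			"https://example.com/preloaded-images.tar.lz4",
			"/tmp/preloaded-images.tar.lz4",
			"0123456789abcdef0123456789abcdef",
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}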

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-355146
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-896642 --alsologtostderr --binary-mirror http://127.0.0.1:42249 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-896642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-896642
--- PASS: TestBinaryMirror (0.60s)
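
Note: --binary-mirror above points minikube at a plain HTTP endpoint on 127.0.0.1 that serves the Kubernetes binaries. A minimal Go sketch of such an endpoint follows; the ./mirror directory layout is an assumption for illustration, not minikube's exact expected layout.

	// mirror.go: serve a local directory over loopback HTTP, similar in spirit
	// to the http://127.0.0.1:42249 endpoint used by the test above.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// e.g. ./mirror/v1.31.0/bin/linux/amd64/kubectl (assumed layout)
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:42249", nil))
	}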

                                                
                                    
x
+
TestOffline (107.98s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-174877 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-174877 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m46.584996336s)
helpers_test.go:175: Cleaning up "offline-crio-174877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-174877
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-174877: (1.39680042s)
--- PASS: TestOffline (107.98s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-306463
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-306463: exit status 85 (47.278823ms)

                                                
                                                
-- stdout --
	* Profile "addons-306463" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-306463"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-306463
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-306463: exit status 85 (44.755489ms)

                                                
                                                
-- stdout --
	* Profile "addons-306463" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-306463"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

                                                
                                    
x
+
TestAddons/Setup (132.19s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-306463 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-306463 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m12.188249544s)
--- PASS: TestAddons/Setup (132.19s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-306463 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-306463 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.13s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vb9xn" [b2af7cc9-0c2a-4a17-aedd-32b9198e8422] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004728765s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-306463
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-306463: (6.122323266s)
--- PASS: TestAddons/parallel/InspektorGadget (12.13s)
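
Note: the "waiting 8m0s for pods matching ..." helper above polls pods by label selector until they are healthy. A simplified client-go sketch of that pattern (list by selector, require every pod to be Running) is shown below; the kubeconfig path is a placeholder, the namespace and label are the ones from this test, and the real helper also checks readiness conditions.

	// podwait.go: poll for pods matching a label selector to reach Running.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func allRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, err
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		for start := time.Now(); time.Since(start) < 8*time.Minute; time.Sleep(2 * time.Second) {
			if ok, err := allRunning(context.Background(), cs, "gadget", "k8s-app=gadget"); err == nil && ok {
				fmt.Println("k8s-app=gadget pods are Running")
				return
			}
		}
		fmt.Println("timed out waiting for k8s-app=gadget pods")
	}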

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (9.11s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.416788ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-4jxbr" [1dfb2d44-f679-47b9-8f2d-4d144742e3a1] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005728717s
addons_test.go:475: (dbg) Run:  kubectl --context addons-306463 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-306463 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.513460045s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-306463 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.11s)

                                                
                                    
x
+
TestAddons/parallel/CSI (57.73s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.675547ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-306463 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-306463 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [21c3540f-ccb3-45fd-834e-f7a9e231647c] Pending
helpers_test.go:344: "task-pv-pod" [21c3540f-ccb3-45fd-834e-f7a9e231647c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [21c3540f-ccb3-45fd-834e-f7a9e231647c] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.005706051s
addons_test.go:590: (dbg) Run:  kubectl --context addons-306463 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-306463 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-306463 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-306463 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-306463 delete pod task-pv-pod: (1.159911049s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-306463 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-306463 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-306463 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6d3f2abb-7b38-482f-8c90-3e95c1332ffd] Pending
helpers_test.go:344: "task-pv-pod-restore" [6d3f2abb-7b38-482f-8c90-3e95c1332ffd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6d3f2abb-7b38-482f-8c90-3e95c1332ffd] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003700533s
addons_test.go:632: (dbg) Run:  kubectl --context addons-306463 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-306463 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-306463 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-306463 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-306463 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.703988846s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-306463 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.73s)
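
Note: the long runs of helpers_test.go:394 lines above are repeated jsonpath polls of the PVC phase until it reports Bound. A minimal client-go sketch of the same wait, assuming a placeholder kubeconfig path and the hpvc claim name and 6-minute timeout from this test:

	// pvcwait.go: poll a PersistentVolumeClaim until its phase is Bound.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil && pvc.Status.Phase == corev1.ClaimBound {
				return nil
			}
			time.Sleep(2 * time.Second) // same spirit as the repeated kubectl polls above
		}
		return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForBound(context.Background(), cs, "default", "hpvc", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("hpvc is Bound")
	}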

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.89s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-306463 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-306463 --alsologtostderr -v=1: (1.221185s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-p8hlb" [7eca0917-b3e2-48ec-bdf4-c6d28903e890] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-p8hlb" [7eca0917-b3e2-48ec-bdf4-c6d28903e890] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.012060964s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-306463 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-306463 addons disable headlamp --alsologtostderr -v=1: (5.654593422s)
--- PASS: TestAddons/parallel/Headlamp (17.89s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.84s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-9mkhq" [b9ca24bc-2998-4f22-943c-ca875f6ed7cb] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.01074547s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-306463
--- PASS: TestAddons/parallel/CloudSpanner (5.84s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (53.28s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-306463 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-306463 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-306463 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ee6952e4-9519-41d9-bcd1-f9113da1df63] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ee6952e4-9519-41d9-bcd1-f9113da1df63] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ee6952e4-9519-41d9-bcd1-f9113da1df63] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005433537s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-306463 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-306463 ssh "cat /opt/local-path-provisioner/pvc-2be347cf-fef5-49ec-8123-8e3cf02ab859_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-306463 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-306463 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-306463 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-306463 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.542626109s)
--- PASS: TestAddons/parallel/LocalPath (53.28s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.85s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-smwnt" [cf2f1df4-c2cd-4ab3-927a-16595a20e831] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004736301s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-306463
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.85s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.68s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-vvgf7" [760398d8-bb5a-44a2-b788-de63c290c3fc] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004074536s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-306463 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-306463 addons disable yakd --alsologtostderr -v=1: (5.6771518s)
--- PASS: TestAddons/parallel/Yakd (11.68s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (7.54s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-306463
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-306463: (7.273216555s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-306463
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-306463
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-306463
--- PASS: TestAddons/StoppedEnableDisable (7.54s)

                                                
                                    
x
+
TestCertOptions (114.15s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-331722 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-331722 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m52.914154684s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-331722 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-331722 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-331722 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-331722" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-331722
--- PASS: TestCertOptions (114.15s)
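
Note: the openssl invocation above checks that the --apiserver-ips/--apiserver-names flags ended up as SANs in apiserver.crt. An equivalent check can be sketched with Go's crypto/x509; the certificate path is the one from the log and would need to be read on (or copied from) the node.

	// certsan.go: parse apiserver.crt and print/inspect its SANs.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}

		// The start flags above should be reflected in the SANs.
		fmt.Println("DNS names:", cert.DNSNames)    // expect localhost, www.google.com, ...
		fmt.Println("IP SANs:  ", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...

		want := net.ParseIP("192.168.15.15")
		found := false
		for _, ip := range cert.IPAddresses {
			if ip.Equal(want) {
				found = true
			}
		}
		fmt.Println("192.168.15.15 present:", found)
	}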

                                                
                                    
x
+
TestCertExpiration (307.1s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-333713 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-333713 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m28.221128249s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-333713 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-333713 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (37.055365535s)
helpers_test.go:175: Cleaning up "cert-expiration-333713" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-333713
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-333713: (1.817747959s)
--- PASS: TestCertExpiration (307.10s)
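
Note: the two --cert-expiration values above are ordinary Go duration strings (3m for the first start, 8760h for the restart). A tiny sketch of how those strings parse and how remaining validity follows from a certificate's NotAfter timestamp; the NotAfter below is fabricated purely for illustration.

	// certexp.go: parse the expiration flags and compute remaining validity.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		short, _ := time.ParseDuration("3m")   // first start: certs valid 3 minutes
		long, _ := time.ParseDuration("8760h") // second start: certs valid ~1 year
		fmt.Println("3m    =", short)
		fmt.Println("8760h =", long, "(", long.Hours()/24, "days )")

		// Given a certificate's NotAfter, remaining validity is just:
		notAfter := time.Now().Add(long) // placeholder NotAfter for illustration
		fmt.Println("remaining:", time.Until(notAfter).Round(time.Minute))
	}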

                                                
                                    
x
+
TestForceSystemdFlag (46.35s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-652506 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-652506 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.15380486s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-652506 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-652506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-652506
--- PASS: TestForceSystemdFlag (46.35s)
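
Note: the cat of /etc/crio/crio.conf.d/02-crio.conf above is how the test confirms --force-systemd took effect. A minimal sketch of that check in Go follows; cgroup_manager = "systemd" is CRI-O's TOML setting for the systemd cgroup manager, and the exact expected line is an assumption here rather than a quote of this test's matcher.

	// criocgroup.go: read the CRI-O drop-in and look for the systemd cgroup manager.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
		if err != nil {
			panic(err)
		}
		conf := string(data)
		fmt.Println(conf)
		fmt.Println("systemd cgroup manager:", strings.Contains(conf, `cgroup_manager = "systemd"`))
	}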

                                                
                                    
x
+
TestForceSystemdEnv (77.04s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-156940 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-156940 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.234451084s)
helpers_test.go:175: Cleaning up "force-systemd-env-156940" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-156940
--- PASS: TestForceSystemdEnv (77.04s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.45s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.45s)

                                                
                                    
x
+
TestErrorSpam/setup (45.49s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-034198 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-034198 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-034198 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-034198 --driver=kvm2  --container-runtime=crio: (45.491764727s)
--- PASS: TestErrorSpam/setup (45.49s)

                                                
                                    
x
+
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-034198 --log_dir /tmp/nospam-034198 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-034198 --log_dir /tmp/nospam-034198 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-034198 --log_dir /tmp/nospam-034198 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
x
+
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-034198 --log_dir /tmp/nospam-034198 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-034198 --log_dir /tmp/nospam-034198 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-034198 --log_dir /tmp/nospam-034198 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
x
+
TestErrorSpam/pause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-034198 --log_dir /tmp/nospam-034198 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-034198 --log_dir /tmp/nospam-034198 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-034198 --log_dir /tmp/nospam-034198 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.77s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-034198 --log_dir /tmp/nospam-034198 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-034198 --log_dir /tmp/nospam-034198 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-034198 --log_dir /tmp/nospam-034198 unpause
--- PASS: TestErrorSpam/unpause (1.77s)

                                                
                                    
x
+
TestErrorSpam/stop (6.34s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-034198 --log_dir /tmp/nospam-034198 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-034198 --log_dir /tmp/nospam-034198 stop: (2.387425228s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-034198 --log_dir /tmp/nospam-034198 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-034198 --log_dir /tmp/nospam-034198 stop: (2.039919085s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-034198 --log_dir /tmp/nospam-034198 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-034198 --log_dir /tmp/nospam-034198 stop: (1.913031431s)
--- PASS: TestErrorSpam/stop (6.34s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19598-5973/.minikube/files/etc/test/nested/copy/13121/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (86.15s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-332452 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0910 17:46:35.171673   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:46:35.178532   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:46:35.189855   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:46:35.211184   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:46:35.252594   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:46:35.334088   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:46:35.495609   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:46:35.817246   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:46:36.459532   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:46:37.741167   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:46:40.303219   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:46:45.425048   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:46:55.666794   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:47:16.148534   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-332452 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m26.15080803s)
--- PASS: TestFunctional/serial/StartWithProxy (86.15s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (40.28s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-332452 --alsologtostderr -v=8
E0910 17:47:57.110569   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-332452 --alsologtostderr -v=8: (40.283434858s)
functional_test.go:663: soft start took 40.284130086s for "functional-332452" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.28s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-332452 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.4s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-332452 cache add registry.k8s.io/pause:3.1: (1.113425354s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-332452 cache add registry.k8s.io/pause:3.3: (1.184693961s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-332452 cache add registry.k8s.io/pause:latest: (1.101959605s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.40s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-332452 /tmp/TestFunctionalserialCacheCmdcacheadd_local4094849742/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 cache add minikube-local-cache-test:functional-332452
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 cache delete minikube-local-cache-test:functional-332452
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-332452
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332452 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (212.69243ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
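
For reference, the reload sequence this test exercises (remove the image from the node, confirm crictl no longer finds it, run `cache reload`, confirm it is back) can be reproduced outside the harness. A minimal Go sketch follows; the binary path and profile name are simply the ones from this run, and error handling is kept deliberately small.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

const (
	bin     = "out/minikube-linux-amd64" // binary path as used in the logs above
	profile = "functional-332452"        // profile name from this run
	image   = "registry.k8s.io/pause:latest"
)

func run(args ...string) error {
	out, err := exec.Command(bin, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", bin, args, out)
	return err
}

func main() {
	_ = run("-p", profile, "ssh", "sudo crictl rmi "+image)
	// Expected to fail: the image was just removed from the node.
	if err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err == nil {
		log.Fatal("image unexpectedly still present on the node")
	}
	if err := run("-p", profile, "cache", "reload"); err != nil {
		log.Fatal(err)
	}
	// Expected to succeed: cache reload pushed the cached image back onto the node.
	if err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err != nil {
		log.Fatal(err)
	}
}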

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 kubectl -- --context functional-332452 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-332452 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (31.53s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-332452 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-332452 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.534107299s)
functional_test.go:761: restart took 31.534215703s for "functional-332452" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.53s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-332452 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
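
The health check above boils down to listing the tier=control-plane pods as JSON and confirming each one is Running with a Ready=True condition. A minimal Go sketch of that check, assuming the same kubectl context as this run and decoding only the fields the check needs:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-332452",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}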

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-332452 logs: (1.39113357s)
--- PASS: TestFunctional/serial/LogsCmd (1.39s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.41s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 logs --file /tmp/TestFunctionalserialLogsFileCmd336341274/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-332452 logs --file /tmp/TestFunctionalserialLogsFileCmd336341274/001/logs.txt: (1.411746956s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.41s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.36s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-332452 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-332452
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-332452: exit status 115 (265.401973ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.199:30911 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-332452 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.36s)
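
The point of this test is the exit code: a service whose backing pods are not running makes `minikube service` fail with status 115 (SVC_UNREACHABLE), as the captured stderr shows. A small Go sketch asserting exactly that, reusing the binary path and profile name from this run:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-332452")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)

	// The command is expected to fail; check for the SVC_UNREACHABLE exit status seen above.
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 115 {
		fmt.Println("got expected exit status 115 for an unreachable service")
		return
	}
	log.Fatalf("expected exit status 115, got err=%v", err)
}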

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332452 config get cpus: exit status 14 (48.597616ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332452 config get cpus: exit status 14 (44.199107ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)
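
The config round-trip above relies on one convention: `config get` on a key that is not set exits with status 14, while set/get/unset otherwise succeed. A Go sketch of the same round-trip, with the binary path and profile name taken from this run:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

// config runs a minikube config subcommand and returns its output and exit code.
func config(args ...string) (string, int) {
	full := append([]string{"-p", "functional-332452", "config"}, args...)
	out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return string(out), ee.ExitCode()
	}
	if err != nil {
		log.Fatal(err)
	}
	return string(out), 0
}

func main() {
	config("unset", "cpus")
	// Reading an unset key should exit 14, as captured in the stderr above.
	if _, code := config("get", "cpus"); code != 14 {
		log.Fatalf("expected exit 14 for an unset key, got %d", code)
	}
	config("set", "cpus", "2")
	val, _ := config("get", "cpus")
	fmt.Printf("cpus = %s", val)
	config("unset", "cpus")
}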

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (14.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-332452 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-332452 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23840: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.85s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-332452 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-332452 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.056373ms)

                                                
                                                
-- stdout --
	* [functional-332452] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 17:49:19.068040   23195 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:49:19.068227   23195 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:49:19.068262   23195 out.go:358] Setting ErrFile to fd 2...
	I0910 17:49:19.068279   23195 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:49:19.068479   23195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:49:19.069031   23195 out.go:352] Setting JSON to false
	I0910 17:49:19.069994   23195 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1911,"bootTime":1725988648,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:49:19.070074   23195 start.go:139] virtualization: kvm guest
	I0910 17:49:19.072214   23195 out.go:177] * [functional-332452] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 17:49:19.073583   23195 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:49:19.074219   23195 notify.go:220] Checking for updates...
	I0910 17:49:19.075927   23195 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:49:19.077069   23195 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:49:19.078254   23195 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:49:19.079263   23195 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 17:49:19.080344   23195 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:49:19.081653   23195 config.go:182] Loaded profile config "functional-332452": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:49:19.082088   23195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:49:19.082125   23195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:49:19.098761   23195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
	I0910 17:49:19.099236   23195 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:49:19.099870   23195 main.go:141] libmachine: Using API Version  1
	I0910 17:49:19.099905   23195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:49:19.100301   23195 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:49:19.100446   23195 main.go:141] libmachine: (functional-332452) Calling .DriverName
	I0910 17:49:19.100668   23195 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:49:19.101081   23195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:49:19.101125   23195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:49:19.116484   23195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36809
	I0910 17:49:19.117251   23195 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:49:19.117821   23195 main.go:141] libmachine: Using API Version  1
	I0910 17:49:19.117844   23195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:49:19.118157   23195 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:49:19.118350   23195 main.go:141] libmachine: (functional-332452) Calling .DriverName
	I0910 17:49:19.151997   23195 out.go:177] * Using the kvm2 driver based on existing profile
	I0910 17:49:19.153060   23195 start.go:297] selected driver: kvm2
	I0910 17:49:19.153078   23195 start.go:901] validating driver "kvm2" against &{Name:functional-332452 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-332452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:49:19.153177   23195 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:49:19.155201   23195 out.go:201] 
	W0910 17:49:19.156473   23195 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0910 17:49:19.157521   23195 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-332452 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)
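
What the test verifies is that `--dry-run` only validates the requested configuration: a 250MB memory request is rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the cluster. A Go sketch of that assertion, using the same flags as the run above:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-332452",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)

	// Exit status 23 is the RSRC_INSUFFICIENT_REQ_MEMORY code captured above.
	var ee *exec.ExitError
	if !errors.As(err, &ee) || ee.ExitCode() != 23 {
		log.Fatalf("expected exit status 23 for an undersized memory request, got %v", err)
	}
	fmt.Println("dry-run correctly rejected the 250MB request")
}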

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-332452 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-332452 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (131.262266ms)

                                                
                                                
-- stdout --
	* [functional-332452] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 17:49:11.823883   23003 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:49:11.823981   23003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:49:11.823990   23003 out.go:358] Setting ErrFile to fd 2...
	I0910 17:49:11.823994   23003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:49:11.824245   23003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 17:49:11.824773   23003 out.go:352] Setting JSON to false
	I0910 17:49:11.825822   23003 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1904,"bootTime":1725988648,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:49:11.825877   23003 start.go:139] virtualization: kvm guest
	I0910 17:49:11.828232   23003 out.go:177] * [functional-332452] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0910 17:49:11.829528   23003 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:49:11.829561   23003 notify.go:220] Checking for updates...
	I0910 17:49:11.831516   23003 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:49:11.832722   23003 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 17:49:11.834144   23003 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 17:49:11.835327   23003 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 17:49:11.836668   23003 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:49:11.838568   23003 config.go:182] Loaded profile config "functional-332452": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 17:49:11.839125   23003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:49:11.839182   23003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:49:11.854158   23003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44101
	I0910 17:49:11.854592   23003 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:49:11.855073   23003 main.go:141] libmachine: Using API Version  1
	I0910 17:49:11.855092   23003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:49:11.855371   23003 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:49:11.855563   23003 main.go:141] libmachine: (functional-332452) Calling .DriverName
	I0910 17:49:11.855770   23003 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:49:11.856037   23003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 17:49:11.856071   23003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:49:11.870559   23003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41763
	I0910 17:49:11.870931   23003 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:49:11.871323   23003 main.go:141] libmachine: Using API Version  1
	I0910 17:49:11.871344   23003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:49:11.871649   23003 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:49:11.871810   23003 main.go:141] libmachine: (functional-332452) Calling .DriverName
	I0910 17:49:11.903271   23003 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0910 17:49:11.904394   23003 start.go:297] selected driver: kvm2
	I0910 17:49:11.904403   23003 start.go:901] validating driver "kvm2" against &{Name:functional-332452 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-332452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:49:11.904520   23003 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:49:11.906595   23003 out.go:201] 
	W0910 17:49:11.907873   23003 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0910 17:49:11.909123   23003 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.78s)
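
The three invocations above show the plain, Go-template, and JSON output modes of `minikube status`. A compact Go sketch that runs all three; the template string, including its "kublet" field label, is copied verbatim from the test invocation, and exit codes are deliberately ignored here since status is non-zero for stopped clusters:

package main

import (
	"fmt"
	"os/exec"
)

// status runs `minikube status` for the profile from this run with optional extra flags.
func status(extra ...string) string {
	args := append([]string{"-p", "functional-332452", "status"}, extra...)
	out, _ := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return string(out)
}

func main() {
	fmt.Print(status())
	fmt.Print(status("-f", "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"))
	fmt.Print(status("-o", "json"))
}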

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (15.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-332452 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-332452 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-f8nrw" [4d94fa2a-ec9e-431f-8d77-5996984b0fd6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-f8nrw" [4d94fa2a-ec9e-431f-8d77-5996984b0fd6] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 15.015712333s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.199:32286
functional_test.go:1675: http://192.168.39.199:32286: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-f8nrw

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.199:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.199:32286
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (15.53s)
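
The workflow here is: create an echoserver deployment, expose it as a NodePort service, ask minikube for the URL, and fetch it. A Go sketch of the same steps, reusing the names from this run; the pod-readiness wait the harness performs is elided, so a real run may need a retry around the HTTP GET.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	ctx := []string{"--context", "functional-332452"}
	// Errors from the create/expose steps are ignored in this sketch.
	exec.Command("kubectl", append(ctx, "create", "deployment", "hello-node-connect",
		"--image=registry.k8s.io/echoserver:1.8")...).Run()
	exec.Command("kubectl", append(ctx, "expose", "deployment", "hello-node-connect",
		"--type=NodePort", "--port=8080")...).Run()

	// Ask minikube for the NodePort endpoint of the service.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-332452",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out))

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s\n%s", url, resp.Status, body)
}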

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (34.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [129e5675-14be-4c55-b85d-6b9948fb651b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003900165s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-332452 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-332452 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-332452 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-332452 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-332452 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e159f3cd-2be9-4fc8-9257-5c3994237e03] Pending
helpers_test.go:344: "sp-pod" [e159f3cd-2be9-4fc8-9257-5c3994237e03] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e159f3cd-2be9-4fc8-9257-5c3994237e03] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.004121664s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-332452 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-332452 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-332452 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ac0d0dd9-4b78-4140-8940-abbc46175ca4] Pending
helpers_test.go:344: "sp-pod" [ac0d0dd9-4b78-4140-8940-abbc46175ca4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ac0d0dd9-4b78-4140-8940-abbc46175ca4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003933526s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-332452 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.78s)
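
The persistence check above writes a file into the PVC-backed mount, deletes the pod, recreates it, and confirms the file is still there. A Go sketch of those steps against the repo's testdata manifests; the readiness waits the harness performs between steps are elided, so a real run should poll for the pod to be Running before each exec.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// kubectl runs a command against the context from this run and fails fast on error.
func kubectl(args ...string) {
	full := append([]string{"--context", "functional-332452"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write into the claim
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml") // fresh pod, same claim
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")            // foo should still be there
}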

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh -n functional-332452 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 cp functional-332452:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3222776749/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh -n functional-332452 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh -n functional-332452 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.38s)
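
The copy test pushes a local file into the node with `minikube cp` and reads it back over ssh. A Go sketch of that round-trip with a byte-for-byte comparison, using the same paths and profile as the run above:

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const bin, profile = "out/minikube-linux-amd64", "functional-332452"

	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	// Copy the file into the node's filesystem.
	if err := exec.Command(bin, "-p", profile, "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").Run(); err != nil {
		log.Fatal(err)
	}
	// Read it back over ssh (-n selects the node, as in the log above).
	got, err := exec.Command(bin, "-p", profile, "ssh", "-n", profile,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatalf("copied file differs from source:\n%s", got)
	}
	log.Println("cp round-trip OK")
}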

                                                
                                    
x
+
TestFunctional/parallel/MySQL (21.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-332452 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-nc8ck" [c851cd52-2e3f-4463-b665-410ca21deec2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-nc8ck" [c851cd52-2e3f-4463-b665-410ca21deec2] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.004283482s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-332452 exec mysql-6cdb49bbb-nc8ck -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-332452 exec mysql-6cdb49bbb-nc8ck -- mysql -ppassword -e "show databases;": exit status 1 (137.911993ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-332452 exec mysql-6cdb49bbb-nc8ck -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.89s)
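
The first `show databases;` above fails with ERROR 2002 because mysqld is not yet accepting connections even though the pod is Running; the harness simply retries until it succeeds. A Go sketch of that retry loop; the pod name is the one from this run and would normally be looked up by label.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-332452", "exec", "mysql-6cdb49bbb-nc8ck",
		"--", "mysql", "-ppassword", "-e", "show databases;"}
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		// ERROR 2002 is expected while mysqld is still starting up inside the pod.
		log.Printf("attempt %d: %v (mysqld may still be starting)", attempt, err)
		time.Sleep(3 * time.Second)
	}
	log.Fatal("mysql never became reachable")
}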

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/13121/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "sudo cat /etc/test/nested/copy/13121/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/13121.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "sudo cat /etc/ssl/certs/13121.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/13121.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "sudo cat /usr/share/ca-certificates/13121.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/131212.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "sudo cat /etc/ssl/certs/131212.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/131212.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "sudo cat /usr/share/ca-certificates/131212.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.30s)
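
The test probes each synced certificate at two copies plus a hash-named entry under /etc/ssl/certs. A Go sketch that walks the same paths over `minikube ssh`; the paths are the ones probed in this run and correspond to the host certificates being synced into the VM.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/13121.pem",
		"/usr/share/ca-certificates/13121.pem",
		"/etc/ssl/certs/51391683.0",
		"/etc/ssl/certs/131212.pem",
		"/usr/share/ca-certificates/131212.pem",
		"/etc/ssl/certs/3ec20f2e.0",
	}
	for _, p := range paths {
		// cat fails with a non-zero exit if the file is not present inside the VM.
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-332452",
			"ssh", "sudo cat "+p).Run()
		if err != nil {
			log.Fatalf("%s missing inside the VM: %v", p, err)
		}
		fmt.Println("found", p)
	}
}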

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-332452 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332452 ssh "sudo systemctl is-active docker": exit status 1 (225.091006ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332452 ssh "sudo systemctl is-active containerd": exit status 1 (211.153229ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
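
With crio as the configured runtime, the test expects docker and containerd to be disabled: on the node, `systemctl is-active` prints "inactive" and exits with status 3, which minikube ssh surfaces as the non-zero exit captured above. A Go sketch of that probe:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	for _, svc := range []string{"docker", "containerd"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-332452",
			"ssh", "sudo systemctl is-active "+svc).CombinedOutput()
		// A non-zero exit is expected: the remote systemctl exits 3 for an inactive unit.
		if err == nil || !strings.Contains(string(out), "inactive") {
			log.Fatalf("%s appears to be active alongside crio:\n%s", svc, out)
		}
		fmt.Printf("%s: inactive, as expected with the crio runtime\n", svc)
	}
}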

                                                
                                    
x
+
TestFunctional/parallel/License (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.71s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-332452 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-332452
localhost/kicbase/echo-server:functional-332452
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-332452 image ls --format short --alsologtostderr:
I0910 17:49:28.507536   24014 out.go:345] Setting OutFile to fd 1 ...
I0910 17:49:28.507647   24014 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:49:28.507655   24014 out.go:358] Setting ErrFile to fd 2...
I0910 17:49:28.507659   24014 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:49:28.507845   24014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
I0910 17:49:28.508359   24014 config.go:182] Loaded profile config "functional-332452": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0910 17:49:28.508489   24014 config.go:182] Loaded profile config "functional-332452": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0910 17:49:28.508847   24014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0910 17:49:28.508888   24014 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:49:28.523097   24014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46185
I0910 17:49:28.523521   24014 main.go:141] libmachine: () Calling .GetVersion
I0910 17:49:28.524075   24014 main.go:141] libmachine: Using API Version  1
I0910 17:49:28.524097   24014 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:49:28.524495   24014 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:49:28.524703   24014 main.go:141] libmachine: (functional-332452) Calling .GetState
I0910 17:49:28.526742   24014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0910 17:49:28.526774   24014 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:49:28.541739   24014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36163
I0910 17:49:28.542168   24014 main.go:141] libmachine: () Calling .GetVersion
I0910 17:49:28.542690   24014 main.go:141] libmachine: Using API Version  1
I0910 17:49:28.542711   24014 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:49:28.543080   24014 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:49:28.543217   24014 main.go:141] libmachine: (functional-332452) Calling .DriverName
I0910 17:49:28.543422   24014 ssh_runner.go:195] Run: systemctl --version
I0910 17:49:28.543448   24014 main.go:141] libmachine: (functional-332452) Calling .GetSSHHostname
I0910 17:49:28.547043   24014 main.go:141] libmachine: (functional-332452) DBG | domain functional-332452 has defined MAC address 52:54:00:32:56:c7 in network mk-functional-332452
I0910 17:49:28.547426   24014 main.go:141] libmachine: (functional-332452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:56:c7", ip: ""} in network mk-functional-332452: {Iface:virbr1 ExpiryTime:2024-09-10 18:46:19 +0000 UTC Type:0 Mac:52:54:00:32:56:c7 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:functional-332452 Clientid:01:52:54:00:32:56:c7}
I0910 17:49:28.547496   24014 main.go:141] libmachine: (functional-332452) DBG | domain functional-332452 has defined IP address 192.168.39.199 and MAC address 52:54:00:32:56:c7 in network mk-functional-332452
I0910 17:49:28.547716   24014 main.go:141] libmachine: (functional-332452) Calling .GetSSHPort
I0910 17:49:28.547863   24014 main.go:141] libmachine: (functional-332452) Calling .GetSSHKeyPath
I0910 17:49:28.548045   24014 main.go:141] libmachine: (functional-332452) Calling .GetSSHUsername
I0910 17:49:28.548207   24014 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/functional-332452/id_rsa Username:docker}
I0910 17:49:28.652317   24014 ssh_runner.go:195] Run: sudo crictl images --output json
I0910 17:49:28.810365   24014 main.go:141] libmachine: Making call to close driver server
I0910 17:49:28.810379   24014 main.go:141] libmachine: (functional-332452) Calling .Close
I0910 17:49:28.810630   24014 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:49:28.810646   24014 main.go:141] libmachine: Making call to close connection to plugin binary
I0910 17:49:28.810662   24014 main.go:141] libmachine: Making call to close driver server
I0910 17:49:28.810671   24014 main.go:141] libmachine: (functional-332452) Calling .Close
I0910 17:49:28.812917   24014 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:49:28.812940   24014 main.go:141] libmachine: Making call to close connection to plugin binary
I0910 17:49:28.812950   24014 main.go:141] libmachine: (functional-332452) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-332452 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-332452  | 160cf3580c42c | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/kicbase/echo-server           | functional-332452  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-332452 image ls --format table --alsologtostderr:
I0910 17:49:31.660261   24254 out.go:345] Setting OutFile to fd 1 ...
I0910 17:49:31.660388   24254 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:49:31.660399   24254 out.go:358] Setting ErrFile to fd 2...
I0910 17:49:31.660407   24254 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:49:31.660720   24254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
I0910 17:49:31.661547   24254 config.go:182] Loaded profile config "functional-332452": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0910 17:49:31.661688   24254 config.go:182] Loaded profile config "functional-332452": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0910 17:49:31.662255   24254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0910 17:49:31.662304   24254 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:49:31.678520   24254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32917
I0910 17:49:31.679019   24254 main.go:141] libmachine: () Calling .GetVersion
I0910 17:49:31.679669   24254 main.go:141] libmachine: Using API Version  1
I0910 17:49:31.679697   24254 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:49:31.680014   24254 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:49:31.680191   24254 main.go:141] libmachine: (functional-332452) Calling .GetState
I0910 17:49:31.682056   24254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0910 17:49:31.682100   24254 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:49:31.696966   24254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43045
I0910 17:49:31.697396   24254 main.go:141] libmachine: () Calling .GetVersion
I0910 17:49:31.697997   24254 main.go:141] libmachine: Using API Version  1
I0910 17:49:31.698030   24254 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:49:31.698396   24254 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:49:31.698579   24254 main.go:141] libmachine: (functional-332452) Calling .DriverName
I0910 17:49:31.698819   24254 ssh_runner.go:195] Run: systemctl --version
I0910 17:49:31.698846   24254 main.go:141] libmachine: (functional-332452) Calling .GetSSHHostname
I0910 17:49:31.701450   24254 main.go:141] libmachine: (functional-332452) DBG | domain functional-332452 has defined MAC address 52:54:00:32:56:c7 in network mk-functional-332452
I0910 17:49:31.701797   24254 main.go:141] libmachine: (functional-332452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:56:c7", ip: ""} in network mk-functional-332452: {Iface:virbr1 ExpiryTime:2024-09-10 18:46:19 +0000 UTC Type:0 Mac:52:54:00:32:56:c7 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:functional-332452 Clientid:01:52:54:00:32:56:c7}
I0910 17:49:31.701837   24254 main.go:141] libmachine: (functional-332452) DBG | domain functional-332452 has defined IP address 192.168.39.199 and MAC address 52:54:00:32:56:c7 in network mk-functional-332452
I0910 17:49:31.701966   24254 main.go:141] libmachine: (functional-332452) Calling .GetSSHPort
I0910 17:49:31.702123   24254 main.go:141] libmachine: (functional-332452) Calling .GetSSHKeyPath
I0910 17:49:31.702285   24254 main.go:141] libmachine: (functional-332452) Calling .GetSSHUsername
I0910 17:49:31.702407   24254 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/functional-332452/id_rsa Username:docker}
I0910 17:49:31.811488   24254 ssh_runner.go:195] Run: sudo crictl images --output json
I0910 17:49:31.889619   24254 main.go:141] libmachine: Making call to close driver server
I0910 17:49:31.889634   24254 main.go:141] libmachine: (functional-332452) Calling .Close
I0910 17:49:31.889873   24254 main.go:141] libmachine: (functional-332452) DBG | Closing plugin on server side
I0910 17:49:31.889880   24254 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:49:31.889896   24254 main.go:141] libmachine: Making call to close connection to plugin binary
I0910 17:49:31.889914   24254 main.go:141] libmachine: Making call to close driver server
I0910 17:49:31.889927   24254 main.go:141] libmachine: (functional-332452) Calling .Close
I0910 17:49:31.890131   24254 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:49:31.890148   24254 main.go:141] libmachine: Making call to close connection to plugin binary
I0910 17:49:31.890152   24254 main.go:141] libmachine: (functional-332452) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-332452 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-332452"],"size":"4943877"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820
c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"160cf3580c42c319b358314c794f6fd5a62a85d9b485cf960891595f5373b9f1","repoDigests":["localhost/minikube-local-cache-test@sha256:4f203f9800edf34e733eceb788b14dbba3e4690976bb4effe9f1fc551edd82b0"],"repoTags":["localhost/minikube-local-cache-test:functional-332452"],"size":"3330"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e66
1e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v2
0240730-75a5af0c"],"size":"87165492"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"si
ze":"95233506"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"56cc512116c8f894f11ce1995460aef1
ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["regi
stry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-332452 image ls --format json --alsologtostderr:
I0910 17:49:31.360447   24197 out.go:345] Setting OutFile to fd 1 ...
I0910 17:49:31.360574   24197 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:49:31.360585   24197 out.go:358] Setting ErrFile to fd 2...
I0910 17:49:31.360597   24197 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:49:31.360833   24197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
I0910 17:49:31.361456   24197 config.go:182] Loaded profile config "functional-332452": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0910 17:49:31.361562   24197 config.go:182] Loaded profile config "functional-332452": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0910 17:49:31.361935   24197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0910 17:49:31.361989   24197 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:49:31.376875   24197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40181
I0910 17:49:31.377320   24197 main.go:141] libmachine: () Calling .GetVersion
I0910 17:49:31.377834   24197 main.go:141] libmachine: Using API Version  1
I0910 17:49:31.377855   24197 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:49:31.378268   24197 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:49:31.378464   24197 main.go:141] libmachine: (functional-332452) Calling .GetState
I0910 17:49:31.380401   24197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0910 17:49:31.380464   24197 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:49:31.394316   24197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
I0910 17:49:31.394633   24197 main.go:141] libmachine: () Calling .GetVersion
I0910 17:49:31.395131   24197 main.go:141] libmachine: Using API Version  1
I0910 17:49:31.395159   24197 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:49:31.395484   24197 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:49:31.395658   24197 main.go:141] libmachine: (functional-332452) Calling .DriverName
I0910 17:49:31.395817   24197 ssh_runner.go:195] Run: systemctl --version
I0910 17:49:31.395845   24197 main.go:141] libmachine: (functional-332452) Calling .GetSSHHostname
I0910 17:49:31.398685   24197 main.go:141] libmachine: (functional-332452) DBG | domain functional-332452 has defined MAC address 52:54:00:32:56:c7 in network mk-functional-332452
I0910 17:49:31.399000   24197 main.go:141] libmachine: (functional-332452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:56:c7", ip: ""} in network mk-functional-332452: {Iface:virbr1 ExpiryTime:2024-09-10 18:46:19 +0000 UTC Type:0 Mac:52:54:00:32:56:c7 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:functional-332452 Clientid:01:52:54:00:32:56:c7}
I0910 17:49:31.399026   24197 main.go:141] libmachine: (functional-332452) DBG | domain functional-332452 has defined IP address 192.168.39.199 and MAC address 52:54:00:32:56:c7 in network mk-functional-332452
I0910 17:49:31.399186   24197 main.go:141] libmachine: (functional-332452) Calling .GetSSHPort
I0910 17:49:31.399326   24197 main.go:141] libmachine: (functional-332452) Calling .GetSSHKeyPath
I0910 17:49:31.399497   24197 main.go:141] libmachine: (functional-332452) Calling .GetSSHUsername
I0910 17:49:31.399615   24197 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/functional-332452/id_rsa Username:docker}
I0910 17:49:31.512439   24197 ssh_runner.go:195] Run: sudo crictl images --output json
I0910 17:49:31.608800   24197 main.go:141] libmachine: Making call to close driver server
I0910 17:49:31.608815   24197 main.go:141] libmachine: (functional-332452) Calling .Close
I0910 17:49:31.609079   24197 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:49:31.609108   24197 main.go:141] libmachine: Making call to close connection to plugin binary
I0910 17:49:31.609120   24197 main.go:141] libmachine: Making call to close driver server
I0910 17:49:31.609129   24197 main.go:141] libmachine: (functional-332452) Calling .Close
I0910 17:49:31.609394   24197 main.go:141] libmachine: (functional-332452) DBG | Closing plugin on server side
I0910 17:49:31.609405   24197 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:49:31.609418   24197 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
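The JSON listing above is a flat array of objects with id, repoDigests, repoTags and size (size is reported as a string of bytes). A minimal Go sketch of consuming that output, assuming minikube is on the PATH and reusing the profile name from this run purely as a placeholder:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, reported as a string
}

func main() {
	// "functional-332452" is the profile used in this run; substitute your own.
	out, err := exec.Command("minikube", "-p", "functional-332452",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-55s %s bytes\n", tag, img.Size)
	}
}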

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-332452 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 160cf3580c42c319b358314c794f6fd5a62a85d9b485cf960891595f5373b9f1
repoDigests:
- localhost/minikube-local-cache-test@sha256:4f203f9800edf34e733eceb788b14dbba3e4690976bb4effe9f1fc551edd82b0
repoTags:
- localhost/minikube-local-cache-test:functional-332452
size: "3330"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e
repoTags:
- docker.io/library/nginx:latest
size: "191853369"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-332452
size: "4943877"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-332452 image ls --format yaml --alsologtostderr:
I0910 17:49:28.865855   24057 out.go:345] Setting OutFile to fd 1 ...
I0910 17:49:28.865944   24057 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:49:28.865952   24057 out.go:358] Setting ErrFile to fd 2...
I0910 17:49:28.865956   24057 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:49:28.866123   24057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
I0910 17:49:28.866630   24057 config.go:182] Loaded profile config "functional-332452": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0910 17:49:28.866717   24057 config.go:182] Loaded profile config "functional-332452": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0910 17:49:28.867043   24057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0910 17:49:28.867084   24057 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:49:28.882059   24057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46799
I0910 17:49:28.882446   24057 main.go:141] libmachine: () Calling .GetVersion
I0910 17:49:28.882984   24057 main.go:141] libmachine: Using API Version  1
I0910 17:49:28.883011   24057 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:49:28.883309   24057 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:49:28.883484   24057 main.go:141] libmachine: (functional-332452) Calling .GetState
I0910 17:49:28.885496   24057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0910 17:49:28.885542   24057 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:49:28.900026   24057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33259
I0910 17:49:28.900469   24057 main.go:141] libmachine: () Calling .GetVersion
I0910 17:49:28.900986   24057 main.go:141] libmachine: Using API Version  1
I0910 17:49:28.901026   24057 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:49:28.901367   24057 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:49:28.901562   24057 main.go:141] libmachine: (functional-332452) Calling .DriverName
I0910 17:49:28.901773   24057 ssh_runner.go:195] Run: systemctl --version
I0910 17:49:28.901810   24057 main.go:141] libmachine: (functional-332452) Calling .GetSSHHostname
I0910 17:49:28.904268   24057 main.go:141] libmachine: (functional-332452) DBG | domain functional-332452 has defined MAC address 52:54:00:32:56:c7 in network mk-functional-332452
I0910 17:49:28.904619   24057 main.go:141] libmachine: (functional-332452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:56:c7", ip: ""} in network mk-functional-332452: {Iface:virbr1 ExpiryTime:2024-09-10 18:46:19 +0000 UTC Type:0 Mac:52:54:00:32:56:c7 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:functional-332452 Clientid:01:52:54:00:32:56:c7}
I0910 17:49:28.904651   24057 main.go:141] libmachine: (functional-332452) DBG | domain functional-332452 has defined IP address 192.168.39.199 and MAC address 52:54:00:32:56:c7 in network mk-functional-332452
I0910 17:49:28.904758   24057 main.go:141] libmachine: (functional-332452) Calling .GetSSHPort
I0910 17:49:28.904946   24057 main.go:141] libmachine: (functional-332452) Calling .GetSSHKeyPath
I0910 17:49:28.905117   24057 main.go:141] libmachine: (functional-332452) Calling .GetSSHUsername
I0910 17:49:28.905262   24057 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/functional-332452/id_rsa Username:docker}
I0910 17:49:29.001455   24057 ssh_runner.go:195] Run: sudo crictl images --output json
I0910 17:49:29.064257   24057 main.go:141] libmachine: Making call to close driver server
I0910 17:49:29.064275   24057 main.go:141] libmachine: (functional-332452) Calling .Close
I0910 17:49:29.064532   24057 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:49:29.064549   24057 main.go:141] libmachine: Making call to close connection to plugin binary
I0910 17:49:29.064572   24057 main.go:141] libmachine: Making call to close driver server
I0910 17:49:29.064584   24057 main.go:141] libmachine: (functional-332452) Calling .Close
I0910 17:49:29.064813   24057 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:49:29.064842   24057 main.go:141] libmachine: (functional-332452) DBG | Closing plugin on server side
I0910 17:49:29.064858   24057 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332452 ssh pgrep buildkitd: exit status 1 (196.509303ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 image build -t localhost/my-image:functional-332452 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-332452 image build -t localhost/my-image:functional-332452 testdata/build --alsologtostderr: (4.12230492s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-332452 image build -t localhost/my-image:functional-332452 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0cbb6158750
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-332452
--> 519b468d99c
Successfully tagged localhost/my-image:functional-332452
519b468d99cf7756f1b99c5821169f6b42ab695bd0076b8a5426276f67403310
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-332452 image build -t localhost/my-image:functional-332452 testdata/build --alsologtostderr:
I0910 17:49:29.305530   24111 out.go:345] Setting OutFile to fd 1 ...
I0910 17:49:29.305673   24111 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:49:29.305683   24111 out.go:358] Setting ErrFile to fd 2...
I0910 17:49:29.305687   24111 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:49:29.305866   24111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
I0910 17:49:29.306366   24111 config.go:182] Loaded profile config "functional-332452": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0910 17:49:29.306859   24111 config.go:182] Loaded profile config "functional-332452": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0910 17:49:29.307201   24111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0910 17:49:29.307248   24111 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:49:29.322696   24111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
I0910 17:49:29.323127   24111 main.go:141] libmachine: () Calling .GetVersion
I0910 17:49:29.323598   24111 main.go:141] libmachine: Using API Version  1
I0910 17:49:29.323620   24111 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:49:29.323927   24111 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:49:29.324140   24111 main.go:141] libmachine: (functional-332452) Calling .GetState
I0910 17:49:29.326146   24111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0910 17:49:29.326179   24111 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:49:29.340499   24111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34243
I0910 17:49:29.340885   24111 main.go:141] libmachine: () Calling .GetVersion
I0910 17:49:29.341392   24111 main.go:141] libmachine: Using API Version  1
I0910 17:49:29.341414   24111 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:49:29.341697   24111 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:49:29.341862   24111 main.go:141] libmachine: (functional-332452) Calling .DriverName
I0910 17:49:29.342053   24111 ssh_runner.go:195] Run: systemctl --version
I0910 17:49:29.342076   24111 main.go:141] libmachine: (functional-332452) Calling .GetSSHHostname
I0910 17:49:29.344428   24111 main.go:141] libmachine: (functional-332452) DBG | domain functional-332452 has defined MAC address 52:54:00:32:56:c7 in network mk-functional-332452
I0910 17:49:29.344826   24111 main.go:141] libmachine: (functional-332452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:56:c7", ip: ""} in network mk-functional-332452: {Iface:virbr1 ExpiryTime:2024-09-10 18:46:19 +0000 UTC Type:0 Mac:52:54:00:32:56:c7 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:functional-332452 Clientid:01:52:54:00:32:56:c7}
I0910 17:49:29.344851   24111 main.go:141] libmachine: (functional-332452) DBG | domain functional-332452 has defined IP address 192.168.39.199 and MAC address 52:54:00:32:56:c7 in network mk-functional-332452
I0910 17:49:29.344972   24111 main.go:141] libmachine: (functional-332452) Calling .GetSSHPort
I0910 17:49:29.345146   24111 main.go:141] libmachine: (functional-332452) Calling .GetSSHKeyPath
I0910 17:49:29.345294   24111 main.go:141] libmachine: (functional-332452) Calling .GetSSHUsername
I0910 17:49:29.345426   24111 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/functional-332452/id_rsa Username:docker}
I0910 17:49:29.435409   24111 build_images.go:161] Building image from path: /tmp/build.508108020.tar
I0910 17:49:29.435465   24111 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0910 17:49:29.446151   24111 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.508108020.tar
I0910 17:49:29.450638   24111 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.508108020.tar: stat -c "%s %y" /var/lib/minikube/build/build.508108020.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.508108020.tar': No such file or directory
I0910 17:49:29.450677   24111 ssh_runner.go:362] scp /tmp/build.508108020.tar --> /var/lib/minikube/build/build.508108020.tar (3072 bytes)
I0910 17:49:29.476799   24111 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.508108020
I0910 17:49:29.485907   24111 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.508108020 -xf /var/lib/minikube/build/build.508108020.tar
I0910 17:49:29.495183   24111 crio.go:315] Building image: /var/lib/minikube/build/build.508108020
I0910 17:49:29.495241   24111 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-332452 /var/lib/minikube/build/build.508108020 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0910 17:49:33.359802   24111 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-332452 /var/lib/minikube/build/build.508108020 --cgroup-manager=cgroupfs: (3.864531762s)
I0910 17:49:33.359886   24111 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.508108020
I0910 17:49:33.374405   24111 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.508108020.tar
I0910 17:49:33.384320   24111 build_images.go:217] Built localhost/my-image:functional-332452 from /tmp/build.508108020.tar
I0910 17:49:33.384348   24111 build_images.go:133] succeeded building to: functional-332452
I0910 17:49:33.384354   24111 build_images.go:134] failed building to: 
I0910 17:49:33.384379   24111 main.go:141] libmachine: Making call to close driver server
I0910 17:49:33.384395   24111 main.go:141] libmachine: (functional-332452) Calling .Close
I0910 17:49:33.384638   24111 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:49:33.384651   24111 main.go:141] libmachine: Making call to close connection to plugin binary
I0910 17:49:33.384658   24111 main.go:141] libmachine: Making call to close driver server
I0910 17:49:33.384665   24111 main.go:141] libmachine: (functional-332452) Calling .Close
I0910 17:49:33.384870   24111 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:49:33.384882   24111 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 image ls
2024/09/10 17:49:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.55s)
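The three STEPs in the build log correspond to a small context containing a Dockerfile (FROM gcr.io/k8s-minikube/busybox; RUN true; ADD content.txt /) plus a content.txt file. The exact contents of testdata/build are not reproduced here, so the sketch below reconstructs an equivalent context and runs the same image build invocation; the file names and contents are assumptions:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Build an illustrative context equivalent to the three STEPs shown above.
	dir, err := os.MkdirTemp("", "imagebuild")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}

	// With crio and no buildkitd on the node (the pgrep above exits 1), minikube
	// copies the context into the VM and builds it there with podman.
	cmd := exec.Command("minikube", "-p", "functional-332452", "image", "build",
		"-t", "localhost/my-image:functional-332452", dir)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}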

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-332452
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.41s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 image load --daemon kicbase/echo-server:functional-332452 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-332452 image load --daemon kicbase/echo-server:functional-332452 --alsologtostderr: (1.391906828s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 image load --daemon kicbase/echo-server:functional-332452 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.20s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (19.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-332452 /tmp/TestFunctionalparallelMountCmdany-port4267097203/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725990539112581364" to /tmp/TestFunctionalparallelMountCmdany-port4267097203/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725990539112581364" to /tmp/TestFunctionalparallelMountCmdany-port4267097203/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725990539112581364" to /tmp/TestFunctionalparallelMountCmdany-port4267097203/001/test-1725990539112581364
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332452 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (211.464244ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 10 17:48 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 10 17:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 10 17:48 test-1725990539112581364
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh cat /mount-9p/test-1725990539112581364
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-332452 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7c3275a4-834a-427b-8974-e108fddab2e4] Pending
helpers_test.go:344: "busybox-mount" [7c3275a4-834a-427b-8974-e108fddab2e4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7c3275a4-834a-427b-8974-e108fddab2e4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7c3275a4-834a-427b-8974-e108fddab2e4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 17.003694114s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-332452 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-332452 /tmp/TestFunctionalparallelMountCmdany-port4267097203/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (19.91s)
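The any-port flow above boils down to: start minikube mount in the background, poll findmnt inside the guest until the 9p mount appears (the first findmnt fails only because the mount is not up yet), then use the shared directory and tear the mount down. A minimal sketch of that loop, with the host directory and profile name as placeholders and a plain Process.Kill standing in for the test's daemon helper:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-332452" // placeholder profile name
	hostDir := "/tmp/mount-demo"   // any host directory to share with the guest
	if err := os.MkdirAll(hostDir, 0o755); err != nil {
		panic(err)
	}

	// Start the 9p mount in the background.
	mount := exec.Command("minikube", "mount", "-p", profile, hostDir+":/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// Poll until the mount is visible inside the guest, as the test does with
	// repeated `findmnt -T /mount-9p | grep 9p` calls.
	for i := 0; i < 30; i++ {
		if exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T /mount-9p | grep 9p").Run() == nil {
			fmt.Println("/mount-9p is mounted in the guest")
			return
		}
		time.Sleep(time.Second)
	}
	panic("mount never became visible")
}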

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-332452
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 image load --daemon kicbase/echo-server:functional-332452 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)
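The tag-and-load sequence above (docker pull, docker tag, minikube image load --daemon) is how a locally built or retagged image gets into the cluster's crio runtime without going through a registry. A short sketch of the same sequence driven from Go; the profile name is a placeholder:

package main

import (
	"os"
	"os/exec"
)

// run executes a command and streams its output, stopping on the first failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	profile := "functional-332452" // placeholder; substitute your profile
	tag := "kicbase/echo-server:" + profile

	run("docker", "pull", "kicbase/echo-server:latest")
	run("docker", "tag", "kicbase/echo-server:latest", tag)
	// Copy the image from the local Docker daemon into the cluster's runtime (crio here).
	run("minikube", "-p", profile, "image", "load", "--daemon", tag)
	// Verify it is now visible to the runtime inside the node.
	run("minikube", "-p", profile, "image", "ls")
}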

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 image save kicbase/echo-server:functional-332452 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (2.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 image rm kicbase/echo-server:functional-332452 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-332452 image rm kicbase/echo-server:functional-332452 --alsologtostderr: (2.642491573s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.98s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-332452 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (5.083452594s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.32s)
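ImageSaveToFile and ImageLoadFromFile together exercise a tarball round trip: image save writes the image from the cluster's runtime to a tar on the host, and image load reads it back in. A minimal sketch of that round trip, with the tarball path and profile name as placeholders:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

// run executes a command and streams its output, stopping on the first failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	profile := "functional-332452" // placeholder profile name
	img := "kicbase/echo-server:" + profile
	tarball := filepath.Join(os.TempDir(), "echo-server-save.tar")

	// Export the image to a tarball on the host, then load it back into the
	// runtime, mirroring the two tests above.
	run("minikube", "-p", profile, "image", "save", img, tarball)
	run("minikube", "-p", profile, "image", "load", tarball)
	run("minikube", "-p", profile, "image", "ls")
}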

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-332452
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 image save --daemon kicbase/echo-server:functional-332452 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-332452
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
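Note the inspect target above: the image pushed back with --daemon shows up in Docker as localhost/kicbase/echo-server:functional-332452, matching the localhost/ name crio lists in the image table earlier. A short sketch of the same save-and-inspect check; the profile name is a placeholder:

package main

import (
	"os"
	"os/exec"
)

// run executes a command and streams its output, stopping on the first failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	profile := "functional-332452" // placeholder profile name

	// Push the image from the cluster's runtime straight into the local Docker daemon.
	run("minikube", "-p", profile, "image", "save", "--daemon",
		"kicbase/echo-server:"+profile)
	// crio lists the unqualified name under localhost/ (see the image table earlier),
	// and that is the name the image arrives under in Docker.
	run("docker", "image", "inspect", "localhost/kicbase/echo-server:"+profile)
}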

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-332452 /tmp/TestFunctionalparallelMountCmdspecific-port2776764430/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "findmnt -T /mount-9p | grep 9p"
E0910 17:49:19.032414   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332452 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (249.400372ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-332452 /tmp/TestFunctionalparallelMountCmdspecific-port2776764430/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332452 ssh "sudo umount -f /mount-9p": exit status 1 (203.157214ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-332452 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-332452 /tmp/TestFunctionalparallelMountCmdspecific-port2776764430/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.92s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-332452 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-332452 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-jbbl4" [287f92f4-c692-4b8b-be30-88be43427cc9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-jbbl4" [287f92f4-c692-4b8b-be30-88be43427cc9] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003331262s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.17s)
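The deployment flow above is plain kubectl: create a deployment from registry.k8s.io/echoserver:1.8, expose it as a NodePort service on 8080, and wait for the app=hello-node pod to become Ready. A minimal sketch of the same steps; kubectl wait stands in for the test's own polling helper:

package main

import (
	"os"
	"os/exec"
)

// kubectl runs a kubectl command and streams its output, stopping on the first failure.
func kubectl(args ...string) {
	cmd := exec.Command("kubectl", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	ctx := "functional-332452" // kube context created for the minikube profile

	kubectl("--context", ctx, "create", "deployment", "hello-node",
		"--image=registry.k8s.io/echoserver:1.8")
	kubectl("--context", ctx, "expose", "deployment", "hello-node",
		"--type=NodePort", "--port=8080")
	kubectl("--context", ctx, "wait", "--for=condition=ready", "pod",
		"-l", "app=hello-node", "--timeout=600s")
}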

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-332452 /tmp/TestFunctionalparallelMountCmdVerifyCleanup108526267/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-332452 /tmp/TestFunctionalparallelMountCmdVerifyCleanup108526267/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-332452 /tmp/TestFunctionalparallelMountCmdVerifyCleanup108526267/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332452 ssh "findmnt -T" /mount1: exit status 1 (321.477809ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-332452 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-332452 /tmp/TestFunctionalparallelMountCmdVerifyCleanup108526267/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-332452 /tmp/TestFunctionalparallelMountCmdVerifyCleanup108526267/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-332452 /tmp/TestFunctionalparallelMountCmdVerifyCleanup108526267/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.26s)
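
VerifyCleanup above starts three mount daemons, probes each target with findmnt over ssh, then tears everything down with a single kill flag. A sketch of the same sequence by hand, assuming the binary path, profile and mount points from the log; error handling is deliberately minimal.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-332452"
	// Probe each 9p target the way the test does.
	for _, m := range []string{"/mount1", "/mount2", "/mount3"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "findmnt -T "+m).CombinedOutput()
		fmt.Printf("%s: err=%v\n%s", m, err, out)
	}
	// --kill=true terminates the mount daemons started for this profile.
	if err := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", profile, "--kill=true").Run(); err != nil {
		fmt.Println("kill failed:", err)
	}
}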

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "209.695752ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "42.539007ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "207.759073ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.219357ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 service list
functional_test.go:1459: (dbg) Done: out/minikube-linux-amd64 -p functional-332452 service list: (1.362258231s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-amd64 -p functional-332452 service list -o json: (1.240260566s)
functional_test.go:1494: Took "1.240400859s" to run "out/minikube-linux-amd64 -p functional-332452 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.199:32533
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-332452 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.199:32533
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
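
The HTTPS/Format/URL checks above only differ in how the endpoint is printed. A sketch that resolves the URL the same way and then issues one HTTP GET against it; the GET is an illustrative extra, the test itself stops at parsing the endpoint (http://192.168.39.199:32533 in this run).

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the NodePort endpoint for the hello-node service.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-332452",
		"service", "hello-node", "--url").Output()
	if err != nil {
		fmt.Println("service --url failed:", err)
		return
	}
	url := strings.TrimSpace(string(out))
	fmt.Println("endpoint:", url)
	// Illustrative extra: confirm the endpoint answers at all.
	if resp, err := http.Get(url); err == nil {
		fmt.Println("status:", resp.Status)
		resp.Body.Close()
	}
}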

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-332452
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-332452
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-332452
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (188.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-558946 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0910 17:51:35.171420   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:02.874309   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-558946 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m7.695280691s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (188.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-558946 -- rollout status deployment/busybox: (3.454405642s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- exec busybox-7dff88458-2t4ms -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- exec busybox-7dff88458-szkr7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- exec busybox-7dff88458-xnl8m -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- exec busybox-7dff88458-2t4ms -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- exec busybox-7dff88458-szkr7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- exec busybox-7dff88458-xnl8m -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- exec busybox-7dff88458-2t4ms -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- exec busybox-7dff88458-szkr7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- exec busybox-7dff88458-xnl8m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.56s)
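
DeployApp above discovers the busybox pod names with a jsonpath query and runs the same three nslookup checks in each pod. A sketch of that loop, calling kubectl directly with the ha-558946 context instead of going through the minikube kubectl wrapper used by the test.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "ha-558946"
	// Discover pod names the same way the test does.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		fmt.Println("get pods failed:", err)
		return
	}
	for _, pod := range strings.Fields(string(out)) {
		if !strings.HasPrefix(pod, "busybox-") {
			continue
		}
		for _, name := range []string{"kubernetes.io", "kubernetes.default",
			"kubernetes.default.svc.cluster.local"} {
			err := exec.Command("kubectl", "--context", ctx, "exec", pod,
				"--", "nslookup", name).Run()
			fmt.Printf("%s -> %s: err=%v\n", pod, name, err)
		}
	}
}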

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- exec busybox-7dff88458-2t4ms -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- exec busybox-7dff88458-2t4ms -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- exec busybox-7dff88458-szkr7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- exec busybox-7dff88458-szkr7 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- exec busybox-7dff88458-xnl8m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-558946 -- exec busybox-7dff88458-xnl8m -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.15s)
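
PingHostFromPods resolves host.minikube.internal inside each pod and pings the resulting address (192.168.39.1 in this run). A sketch for a single pod, reusing the exact nslookup/awk/cut pipeline from the log to extract the IP; the pod name is one of the three above and is only an example.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7dff88458-2t4ms"
	// Pipeline copied from the test: line 5 of nslookup output holds the answer.
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "--context", "ha-558946",
		"exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		fmt.Println("resolve failed:", err)
		return
	}
	hostIP := strings.TrimSpace(string(out))
	err = exec.Command("kubectl", "--context", "ha-558946",
		"exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP).Run()
	fmt.Printf("ping %s from %s: err=%v\n", hostIP, pod, err)
}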

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (52.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-558946 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-558946 -v=7 --alsologtostderr: (51.771676436s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (52.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-558946 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp testdata/cp-test.txt ha-558946:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp ha-558946:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1374294018/001/cp-test_ha-558946.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp ha-558946:/home/docker/cp-test.txt ha-558946-m02:/home/docker/cp-test_ha-558946_ha-558946-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m02 "sudo cat /home/docker/cp-test_ha-558946_ha-558946-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp ha-558946:/home/docker/cp-test.txt ha-558946-m03:/home/docker/cp-test_ha-558946_ha-558946-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m03 "sudo cat /home/docker/cp-test_ha-558946_ha-558946-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp ha-558946:/home/docker/cp-test.txt ha-558946-m04:/home/docker/cp-test_ha-558946_ha-558946-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m04 "sudo cat /home/docker/cp-test_ha-558946_ha-558946-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp testdata/cp-test.txt ha-558946-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp ha-558946-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1374294018/001/cp-test_ha-558946-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp ha-558946-m02:/home/docker/cp-test.txt ha-558946:/home/docker/cp-test_ha-558946-m02_ha-558946.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946 "sudo cat /home/docker/cp-test_ha-558946-m02_ha-558946.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp ha-558946-m02:/home/docker/cp-test.txt ha-558946-m03:/home/docker/cp-test_ha-558946-m02_ha-558946-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m03 "sudo cat /home/docker/cp-test_ha-558946-m02_ha-558946-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp ha-558946-m02:/home/docker/cp-test.txt ha-558946-m04:/home/docker/cp-test_ha-558946-m02_ha-558946-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m04 "sudo cat /home/docker/cp-test_ha-558946-m02_ha-558946-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp testdata/cp-test.txt ha-558946-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp ha-558946-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1374294018/001/cp-test_ha-558946-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp ha-558946-m03:/home/docker/cp-test.txt ha-558946:/home/docker/cp-test_ha-558946-m03_ha-558946.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946 "sudo cat /home/docker/cp-test_ha-558946-m03_ha-558946.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp ha-558946-m03:/home/docker/cp-test.txt ha-558946-m02:/home/docker/cp-test_ha-558946-m03_ha-558946-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m02 "sudo cat /home/docker/cp-test_ha-558946-m03_ha-558946-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp ha-558946-m03:/home/docker/cp-test.txt ha-558946-m04:/home/docker/cp-test_ha-558946-m03_ha-558946-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m03 "sudo cat /home/docker/cp-test.txt"
E0910 17:53:56.538069   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:53:56.544414   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:53:56.555768   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:53:56.577199   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:53:56.618607   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m04 "sudo cat /home/docker/cp-test_ha-558946-m03_ha-558946-m04.txt"
E0910 17:53:56.700801   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:53:56.862286   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp testdata/cp-test.txt ha-558946-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m04 "sudo cat /home/docker/cp-test.txt"
E0910 17:53:57.184341   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1374294018/001/cp-test_ha-558946-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt ha-558946:/home/docker/cp-test_ha-558946-m04_ha-558946.txt
E0910 17:53:57.826177   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946 "sudo cat /home/docker/cp-test_ha-558946-m04_ha-558946.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt ha-558946-m02:/home/docker/cp-test_ha-558946-m04_ha-558946-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m02 "sudo cat /home/docker/cp-test_ha-558946-m04_ha-558946-m02.txt"
E0910 17:53:59.108458   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 cp ha-558946-m04:/home/docker/cp-test.txt ha-558946-m03:/home/docker/cp-test_ha-558946-m04_ha-558946-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 ssh -n ha-558946-m03 "sudo cat /home/docker/cp-test_ha-558946-m04_ha-558946-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.55s)
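
CopyFile above is one pattern repeated across every node pair: minikube cp pushes the file, minikube ssh reads it back. A condensed sketch of the push-and-verify half, assuming the node names from this run; the pairwise node-to-node copies are omitted for brevity.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "ha-558946"
	nodes := []string{"ha-558946", "ha-558946-m02", "ha-558946-m03", "ha-558946-m04"}
	for _, n := range nodes {
		// Push the test file to the node.
		cp := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"cp", "testdata/cp-test.txt", n+":/home/docker/cp-test.txt")
		if err := cp.Run(); err != nil {
			fmt.Println("cp to", n, "failed:", err)
			continue
		}
		// Read it back over ssh to confirm the copy landed.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "-n", n, "sudo cat /home/docker/cp-test.txt").Output()
		fmt.Printf("%s: err=%v content=%q\n", n, err, out)
	}
}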

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.458998667s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.46s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 node delete m03 -v=7 --alsologtostderr
E0910 18:03:56.537960   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-558946 node delete m03 -v=7 --alsologtostderr: (16.056398303s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.77s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (314.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-558946 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0910 18:06:35.174257   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:08:56.538002   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:10:19.603303   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:11:35.171820   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-558946 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m13.477415013s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (314.33s)
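
RestartCluster (and DeleteSecondaryNode before it) finishes by checking node readiness with a go-template over .status.conditions. The same check expressed in Go, decoding kubectl get nodes -o json and relying only on standard Node fields; the struct names are local to this sketch.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeList keeps just the fields needed from `kubectl get nodes -o json`.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var nl nodeList
	if err := json.Unmarshal(out, &nl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, n := range nl.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
			}
		}
	}
}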

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (73.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-558946 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-558946 --control-plane -v=7 --alsologtostderr: (1m12.764910163s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-558946 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                    
x
+
TestJSONOutput/start/Command (84.48s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-480647 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0910 18:13:56.538519   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-480647 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m24.481082255s)
--- PASS: TestJSONOutput/start/Command (84.48s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-480647 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-480647 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.38s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-480647 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-480647 --output=json --user=testUser: (7.379712967s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-664059 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-664059 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.50035ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f9b5833b-a5b6-4d35-97a1-80a4718a0a7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-664059] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a585ed99-60ef-4045-b19f-a9fc4998953c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19598"}}
	{"specversion":"1.0","id":"21b177cf-07ef-454b-a10a-94ce469d5f3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"93801ceb-e646-4fc9-bb62-574f9508d52e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig"}}
	{"specversion":"1.0","id":"cad98964-0ae1-41f8-ba4f-ecdb6bfcb94d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube"}}
	{"specversion":"1.0","id":"95f00075-7662-4e09-867c-bfd6ef7ccc7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"27822a26-b1d7-4f25-8e87-ef2e5fccb687","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"126c86df-5f35-4796-8567-beb42e7e17d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-664059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-664059
--- PASS: TestErrorJSONOutput (0.19s)
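
The stdout block above is minikube's --output=json stream: one CloudEvents-style record per line, with the error record carrying the exit code and message. A sketch that decodes such a stream from stdin and surfaces only the .error events; the field names mirror the JSON shown above, while reading from stdin and filtering on the type suffix are illustrative choices.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// minikubeEvent mirrors the record shape in the stdout block above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines
		}
		if strings.HasSuffix(ev.Type, ".error") {
			fmt.Printf("exit %s: %s (%s)\n",
				ev.Data["exitcode"], ev.Data["message"], ev.Data["name"])
		}
	}
}

Fed the run above, this would print: exit 56: The driver 'fail' is not supported on linux/amd64 (DRV_UNSUPPORTED_OS).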

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (87.25s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-609614 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-609614 --driver=kvm2  --container-runtime=crio: (41.777842835s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-611972 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-611972 --driver=kvm2  --container-runtime=crio: (43.081863116s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-609614
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-611972
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-611972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-611972
helpers_test.go:175: Cleaning up "first-609614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-609614
--- PASS: TestMinikubeProfile (87.25s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (26.66s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-495121 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-495121 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.657678939s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.66s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-495121 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-495121 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (27.63s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-507909 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0910 18:16:35.171019   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-507909 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.630567386s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.63s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-507909 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-507909 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-495121 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-507909 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-507909 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-507909
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-507909: (1.298107478s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.21s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-507909
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-507909: (21.209240488s)
--- PASS: TestMountStart/serial/RestartStopped (22.21s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-507909 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-507909 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (113.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-925076 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0910 18:18:56.538013   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-925076 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.988435075s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (113.38s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-925076 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-925076 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-925076 -- rollout status deployment/busybox: (3.270390542s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-925076 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-925076 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-925076 -- exec busybox-7dff88458-gbtc6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-925076 -- exec busybox-7dff88458-xqgf4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-925076 -- exec busybox-7dff88458-gbtc6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-925076 -- exec busybox-7dff88458-xqgf4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-925076 -- exec busybox-7dff88458-gbtc6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-925076 -- exec busybox-7dff88458-xqgf4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.68s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-925076 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-925076 -- exec busybox-7dff88458-gbtc6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-925076 -- exec busybox-7dff88458-gbtc6 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-925076 -- exec busybox-7dff88458-xqgf4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-925076 -- exec busybox-7dff88458-xqgf4 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (47.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-925076 -v 3 --alsologtostderr
E0910 18:19:38.238343   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-925076 -v 3 --alsologtostderr: (46.631959105s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.18s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-925076 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 cp testdata/cp-test.txt multinode-925076:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 cp multinode-925076:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2183249346/001/cp-test_multinode-925076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 cp multinode-925076:/home/docker/cp-test.txt multinode-925076-m02:/home/docker/cp-test_multinode-925076_multinode-925076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076-m02 "sudo cat /home/docker/cp-test_multinode-925076_multinode-925076-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 cp multinode-925076:/home/docker/cp-test.txt multinode-925076-m03:/home/docker/cp-test_multinode-925076_multinode-925076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076-m03 "sudo cat /home/docker/cp-test_multinode-925076_multinode-925076-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 cp testdata/cp-test.txt multinode-925076-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 cp multinode-925076-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2183249346/001/cp-test_multinode-925076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 cp multinode-925076-m02:/home/docker/cp-test.txt multinode-925076:/home/docker/cp-test_multinode-925076-m02_multinode-925076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076 "sudo cat /home/docker/cp-test_multinode-925076-m02_multinode-925076.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 cp multinode-925076-m02:/home/docker/cp-test.txt multinode-925076-m03:/home/docker/cp-test_multinode-925076-m02_multinode-925076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076-m03 "sudo cat /home/docker/cp-test_multinode-925076-m02_multinode-925076-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 cp testdata/cp-test.txt multinode-925076-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 cp multinode-925076-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2183249346/001/cp-test_multinode-925076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 cp multinode-925076-m03:/home/docker/cp-test.txt multinode-925076:/home/docker/cp-test_multinode-925076-m03_multinode-925076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076 "sudo cat /home/docker/cp-test_multinode-925076-m03_multinode-925076.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 cp multinode-925076-m03:/home/docker/cp-test.txt multinode-925076-m02:/home/docker/cp-test_multinode-925076-m03_multinode-925076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076-m02 "sudo cat /home/docker/cp-test_multinode-925076-m03_multinode-925076-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.89s)
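
The copy test above cycles the same file through three directions: host to node, node back to host, and node to node, and verifies every hop with ssh -n <node> "sudo cat ...". A condensed sketch of one round, using the profile and paths from this run:

  # host -> primary node, then read it back over SSH
  out/minikube-linux-amd64 -p multinode-925076 cp testdata/cp-test.txt multinode-925076:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076 "sudo cat /home/docker/cp-test.txt"
  # node -> node, verified on the target node
  out/minikube-linux-amd64 -p multinode-925076 cp multinode-925076:/home/docker/cp-test.txt \
    multinode-925076-m02:/home/docker/cp-test_multinode-925076_multinode-925076-m02.txt
  out/minikube-linux-amd64 -p multinode-925076 ssh -n multinode-925076-m02 \
    "sudo cat /home/docker/cp-test_multinode-925076_multinode-925076-m02.txt"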

                                                
                                    
TestMultiNode/serial/StopNode (2.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-925076 node stop m03: (1.452045027s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-925076 status: exit status 7 (410.730994ms)

                                                
                                                
-- stdout --
	multinode-925076
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-925076-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-925076-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-925076 status --alsologtostderr: exit status 7 (412.112923ms)

                                                
                                                
-- stdout --
	multinode-925076
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-925076-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-925076-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 18:20:16.185365   41777 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:20:16.185460   41777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:20:16.185468   41777 out.go:358] Setting ErrFile to fd 2...
	I0910 18:20:16.185472   41777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:20:16.185646   41777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:20:16.185801   41777 out.go:352] Setting JSON to false
	I0910 18:20:16.185825   41777 mustload.go:65] Loading cluster: multinode-925076
	I0910 18:20:16.185917   41777 notify.go:220] Checking for updates...
	I0910 18:20:16.186146   41777 config.go:182] Loaded profile config "multinode-925076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:20:16.186159   41777 status.go:255] checking status of multinode-925076 ...
	I0910 18:20:16.186526   41777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:20:16.186581   41777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:20:16.205546   41777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38681
	I0910 18:20:16.205982   41777 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:20:16.206587   41777 main.go:141] libmachine: Using API Version  1
	I0910 18:20:16.206617   41777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:20:16.206950   41777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:20:16.207112   41777 main.go:141] libmachine: (multinode-925076) Calling .GetState
	I0910 18:20:16.208630   41777 status.go:330] multinode-925076 host status = "Running" (err=<nil>)
	I0910 18:20:16.208650   41777 host.go:66] Checking if "multinode-925076" exists ...
	I0910 18:20:16.208944   41777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:20:16.208979   41777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:20:16.224053   41777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
	I0910 18:20:16.224529   41777 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:20:16.224957   41777 main.go:141] libmachine: Using API Version  1
	I0910 18:20:16.224986   41777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:20:16.225271   41777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:20:16.225453   41777 main.go:141] libmachine: (multinode-925076) Calling .GetIP
	I0910 18:20:16.228360   41777 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:20:16.228745   41777 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:20:16.228774   41777 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:20:16.228912   41777 host.go:66] Checking if "multinode-925076" exists ...
	I0910 18:20:16.229247   41777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:20:16.229283   41777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:20:16.243963   41777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37641
	I0910 18:20:16.244338   41777 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:20:16.244845   41777 main.go:141] libmachine: Using API Version  1
	I0910 18:20:16.244870   41777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:20:16.245128   41777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:20:16.245297   41777 main.go:141] libmachine: (multinode-925076) Calling .DriverName
	I0910 18:20:16.245451   41777 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 18:20:16.245468   41777 main.go:141] libmachine: (multinode-925076) Calling .GetSSHHostname
	I0910 18:20:16.247752   41777 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:20:16.248121   41777 main.go:141] libmachine: (multinode-925076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:e1", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:17:35 +0000 UTC Type:0 Mac:52:54:00:5f:34:e1 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-925076 Clientid:01:52:54:00:5f:34:e1}
	I0910 18:20:16.248147   41777 main.go:141] libmachine: (multinode-925076) DBG | domain multinode-925076 has defined IP address 192.168.39.248 and MAC address 52:54:00:5f:34:e1 in network mk-multinode-925076
	I0910 18:20:16.248263   41777 main.go:141] libmachine: (multinode-925076) Calling .GetSSHPort
	I0910 18:20:16.248462   41777 main.go:141] libmachine: (multinode-925076) Calling .GetSSHKeyPath
	I0910 18:20:16.248615   41777 main.go:141] libmachine: (multinode-925076) Calling .GetSSHUsername
	I0910 18:20:16.248794   41777 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/multinode-925076/id_rsa Username:docker}
	I0910 18:20:16.329225   41777 ssh_runner.go:195] Run: systemctl --version
	I0910 18:20:16.335662   41777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:20:16.351120   41777 kubeconfig.go:125] found "multinode-925076" server: "https://192.168.39.248:8443"
	I0910 18:20:16.351156   41777 api_server.go:166] Checking apiserver status ...
	I0910 18:20:16.351194   41777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:20:16.365359   41777 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1102/cgroup
	W0910 18:20:16.374793   41777 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1102/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:20:16.374842   41777 ssh_runner.go:195] Run: ls
	I0910 18:20:16.378996   41777 api_server.go:253] Checking apiserver healthz at https://192.168.39.248:8443/healthz ...
	I0910 18:20:16.383364   41777 api_server.go:279] https://192.168.39.248:8443/healthz returned 200:
	ok
	I0910 18:20:16.383390   41777 status.go:422] multinode-925076 apiserver status = Running (err=<nil>)
	I0910 18:20:16.383402   41777 status.go:257] multinode-925076 status: &{Name:multinode-925076 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:20:16.383432   41777 status.go:255] checking status of multinode-925076-m02 ...
	I0910 18:20:16.383740   41777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:20:16.383774   41777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:20:16.398695   41777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42203
	I0910 18:20:16.399116   41777 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:20:16.399569   41777 main.go:141] libmachine: Using API Version  1
	I0910 18:20:16.399588   41777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:20:16.399854   41777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:20:16.400029   41777 main.go:141] libmachine: (multinode-925076-m02) Calling .GetState
	I0910 18:20:16.401350   41777 status.go:330] multinode-925076-m02 host status = "Running" (err=<nil>)
	I0910 18:20:16.401366   41777 host.go:66] Checking if "multinode-925076-m02" exists ...
	I0910 18:20:16.401640   41777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:20:16.401673   41777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:20:16.416925   41777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33371
	I0910 18:20:16.417399   41777 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:20:16.417828   41777 main.go:141] libmachine: Using API Version  1
	I0910 18:20:16.417851   41777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:20:16.418123   41777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:20:16.418305   41777 main.go:141] libmachine: (multinode-925076-m02) Calling .GetIP
	I0910 18:20:16.421159   41777 main.go:141] libmachine: (multinode-925076-m02) DBG | domain multinode-925076-m02 has defined MAC address 52:54:00:4d:e6:b8 in network mk-multinode-925076
	I0910 18:20:16.421518   41777 main.go:141] libmachine: (multinode-925076-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:e6:b8", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:18:40 +0000 UTC Type:0 Mac:52:54:00:4d:e6:b8 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-925076-m02 Clientid:01:52:54:00:4d:e6:b8}
	I0910 18:20:16.421541   41777 main.go:141] libmachine: (multinode-925076-m02) DBG | domain multinode-925076-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:4d:e6:b8 in network mk-multinode-925076
	I0910 18:20:16.421651   41777 host.go:66] Checking if "multinode-925076-m02" exists ...
	I0910 18:20:16.421949   41777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:20:16.421991   41777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:20:16.436800   41777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43771
	I0910 18:20:16.437155   41777 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:20:16.437633   41777 main.go:141] libmachine: Using API Version  1
	I0910 18:20:16.437656   41777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:20:16.437937   41777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:20:16.438095   41777 main.go:141] libmachine: (multinode-925076-m02) Calling .DriverName
	I0910 18:20:16.438236   41777 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 18:20:16.438254   41777 main.go:141] libmachine: (multinode-925076-m02) Calling .GetSSHHostname
	I0910 18:20:16.440699   41777 main.go:141] libmachine: (multinode-925076-m02) DBG | domain multinode-925076-m02 has defined MAC address 52:54:00:4d:e6:b8 in network mk-multinode-925076
	I0910 18:20:16.441064   41777 main.go:141] libmachine: (multinode-925076-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:e6:b8", ip: ""} in network mk-multinode-925076: {Iface:virbr1 ExpiryTime:2024-09-10 19:18:40 +0000 UTC Type:0 Mac:52:54:00:4d:e6:b8 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-925076-m02 Clientid:01:52:54:00:4d:e6:b8}
	I0910 18:20:16.441095   41777 main.go:141] libmachine: (multinode-925076-m02) DBG | domain multinode-925076-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:4d:e6:b8 in network mk-multinode-925076
	I0910 18:20:16.441266   41777 main.go:141] libmachine: (multinode-925076-m02) Calling .GetSSHPort
	I0910 18:20:16.441427   41777 main.go:141] libmachine: (multinode-925076-m02) Calling .GetSSHKeyPath
	I0910 18:20:16.441573   41777 main.go:141] libmachine: (multinode-925076-m02) Calling .GetSSHUsername
	I0910 18:20:16.441702   41777 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5973/.minikube/machines/multinode-925076-m02/id_rsa Username:docker}
	I0910 18:20:16.524006   41777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:20:16.537755   41777 status.go:257] multinode-925076-m02 status: &{Name:multinode-925076-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:20:16.537793   41777 status.go:255] checking status of multinode-925076-m03 ...
	I0910 18:20:16.538084   41777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0910 18:20:16.538116   41777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:20:16.553584   41777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
	I0910 18:20:16.553955   41777 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:20:16.554402   41777 main.go:141] libmachine: Using API Version  1
	I0910 18:20:16.554423   41777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:20:16.554663   41777 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:20:16.554822   41777 main.go:141] libmachine: (multinode-925076-m03) Calling .GetState
	I0910 18:20:16.556427   41777 status.go:330] multinode-925076-m03 host status = "Stopped" (err=<nil>)
	I0910 18:20:16.556439   41777 status.go:343] host is not running, skipping remaining checks
	I0910 18:20:16.556445   41777 status.go:257] multinode-925076-m03 status: &{Name:multinode-925076-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
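
Both status calls above exit with code 7 rather than 0: minikube status reports non-zero (7 in this run) whenever a node's host or kubelet is Stopped, so anything scripting around it has to tolerate that. A minimal sketch of the sequence, assuming the same profile; the "|| echo" guard is illustrative:

  out/minikube-linux-amd64 -p multinode-925076 node stop m03
  # exits 7 while m03 is down; capture the code instead of failing on it
  out/minikube-linux-amd64 -p multinode-925076 status || echo "status exit code: $?"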

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-925076 node start m03 -v=7 --alsologtostderr: (37.003673798s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.61s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-925076 node delete m03: (1.772339151s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.33s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (192.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-925076 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0910 18:28:56.538929   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:31:35.171670   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-925076 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m11.969123677s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-925076 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (192.50s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (40.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-925076
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-925076-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-925076-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (55.732365ms)

                                                
                                                
-- stdout --
	* [multinode-925076-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-925076-m02' is duplicated with machine name 'multinode-925076-m02' in profile 'multinode-925076'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-925076-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-925076-m03 --driver=kvm2  --container-runtime=crio: (39.092440111s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-925076
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-925076: exit status 80 (215.666008ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-925076 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-925076-m03 already exists in multinode-925076-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-925076-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.40s)
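
The two non-zero exits above are the expected guard rails: a new profile may not reuse a machine name that already belongs to a multinode profile (exit 14, MK_USAGE), and node add refuses to create a node whose name collides with an existing standalone profile (exit 80, GUEST_NODE_ADD). Reproducing the first check by hand, with the name from this run:

  out/minikube-linux-amd64 start -p multinode-925076-m02 --driver=kvm2 --container-runtime=crio
  echo $?   # 14 - profile name must be unique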

                                                
                                    
TestScheduledStopUnix (113.88s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-149604 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-149604 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.370203974s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-149604 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-149604 -n scheduled-stop-149604
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-149604 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-149604 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-149604 -n scheduled-stop-149604
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-149604
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-149604 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0910 18:38:56.538357   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/functional-332452/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-149604
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-149604: exit status 7 (63.981421ms)

                                                
                                                
-- stdout --
	scheduled-stop-149604
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-149604 -n scheduled-stop-149604
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-149604 -n scheduled-stop-149604: exit status 7 (63.496174ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-149604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-149604
--- PASS: TestScheduledStopUnix (113.88s)
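
The scheduled-stop flow exercised above amounts to arming a delayed stop, optionally cancelling it, and polling status until the host reports Stopped. A minimal sketch using the flags from this run (the sleep length is illustrative):

  out/minikube-linux-amd64 stop -p scheduled-stop-149604 --schedule 5m
  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-149604
  out/minikube-linux-amd64 stop -p scheduled-stop-149604 --cancel-scheduled
  # re-arm a short schedule and let it fire
  out/minikube-linux-amd64 stop -p scheduled-stop-149604 --schedule 15s
  sleep 20
  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-149604   # prints Stopped, exit code 7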

                                                
                                    
TestRunningBinaryUpgrade (243.69s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.567330979 start -p running-upgrade-926585 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.567330979 start -p running-upgrade-926585 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m8.695851945s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-926585 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0910 18:41:35.171175   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-926585 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m53.361915926s)
helpers_test.go:175: Cleaning up "running-upgrade-926585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-926585
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-926585: (1.151678409s)
--- PASS: TestRunningBinaryUpgrade (243.69s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (148.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.749967387 start -p stopped-upgrade-358325 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.749967387 start -p stopped-upgrade-358325 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m40.337440989s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.749967387 -p stopped-upgrade-358325 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.749967387 -p stopped-upgrade-358325 stop: (2.129523488s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-358325 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-358325 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.979114497s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (148.45s)
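
The upgrade path being validated is: provision a cluster with an older released binary, stop it cleanly, then start the same profile with the freshly built binary. The three commands from this run, in order:

  /tmp/minikube-v1.26.0.749967387 start -p stopped-upgrade-358325 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
  /tmp/minikube-v1.26.0.749967387 -p stopped-upgrade-358325 stop
  out/minikube-linux-amd64 start -p stopped-upgrade-358325 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio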

                                                
                                    
TestNetworkPlugins/group/false (3.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-642043 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-642043 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (98.821681ms)

                                                
                                                
-- stdout --
	* [false-642043] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 18:39:09.193403   49366 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:39:09.193817   49366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:39:09.193866   49366 out.go:358] Setting ErrFile to fd 2...
	I0910 18:39:09.193883   49366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:39:09.194330   49366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5973/.minikube/bin
	I0910 18:39:09.195096   49366 out.go:352] Setting JSON to false
	I0910 18:39:09.196032   49366 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4901,"bootTime":1725988648,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 18:39:09.196086   49366 start.go:139] virtualization: kvm guest
	I0910 18:39:09.197957   49366 out.go:177] * [false-642043] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 18:39:09.199292   49366 notify.go:220] Checking for updates...
	I0910 18:39:09.199319   49366 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 18:39:09.200582   49366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 18:39:09.201787   49366 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	I0910 18:39:09.203034   49366 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	I0910 18:39:09.204296   49366 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 18:39:09.205674   49366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 18:39:09.207479   49366 config.go:182] Loaded profile config "kubernetes-upgrade-192799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0910 18:39:09.207645   49366 config.go:182] Loaded profile config "offline-crio-174877": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0910 18:39:09.207749   49366 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 18:39:09.244203   49366 out.go:177] * Using the kvm2 driver based on user configuration
	I0910 18:39:09.245349   49366 start.go:297] selected driver: kvm2
	I0910 18:39:09.245361   49366 start.go:901] validating driver "kvm2" against <nil>
	I0910 18:39:09.245371   49366 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 18:39:09.247177   49366 out.go:201] 
	W0910 18:39:09.248305   49366 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0910 18:39:09.249439   49366 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-642043 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-642043

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-642043

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-642043

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-642043

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-642043

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-642043

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-642043

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-642043

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-642043

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-642043

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-642043

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-642043" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-642043" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-642043" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-642043" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-642043" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-642043" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-642043" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-642043" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-642043" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-642043" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-642043" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-642043

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

>>> host: /etc/containerd/config.toml:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

>>> host: containerd config dump:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

>>> host: crio daemon status:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

>>> host: crio daemon config:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

>>> host: /etc/crio:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

>>> host: crio config:
* Profile "false-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-642043"

----------------------- debugLogs end: false-642043 [took: 2.969029053s] --------------------------------
helpers_test.go:175: Cleaning up "false-642043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-642043
--- PASS: TestNetworkPlugins/group/false (3.21s)

TestPause/serial/Start (98.59s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-459729 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-459729 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m38.589578631s)
--- PASS: TestPause/serial/Start (98.59s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-358325
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-358325: (1.017469041s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-229565 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-229565 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (63.242852ms)

-- stdout --
	* [NoKubernetes-229565] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-5973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

TestNoKubernetes/serial/StartWithK8s (72.33s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-229565 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-229565 --driver=kvm2  --container-runtime=crio: (1m12.083228429s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-229565 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (72.33s)

TestNoKubernetes/serial/StartWithStopK8s (4.88s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-229565 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-229565 --no-kubernetes --driver=kvm2  --container-runtime=crio: (3.712345429s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-229565 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-229565 status -o json: exit status 2 (211.013415ms)

-- stdout --
	{"Name":"NoKubernetes-229565","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-229565
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (4.88s)

TestNoKubernetes/serial/Start (25.97s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-229565 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-229565 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.971535112s)
--- PASS: TestNoKubernetes/serial/Start (25.97s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-229565 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-229565 "sudo systemctl is-active --quiet service kubelet": exit status 1 (190.925122ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

TestNoKubernetes/serial/ProfileList (1.06s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.06s)

TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-229565
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-229565: (1.29712238s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

TestNoKubernetes/serial/StartNoArgs (42.77s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-229565 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-229565 --driver=kvm2  --container-runtime=crio: (42.766927693s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (42.77s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-229565 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-229565 "sudo systemctl is-active --quiet service kubelet": exit status 1 (192.387867ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

TestNetworkPlugins/group/auto/Start (134.33s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-642043 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-642043 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m14.331660387s)
--- PASS: TestNetworkPlugins/group/auto/Start (134.33s)

TestNetworkPlugins/group/kindnet/Start (74.4s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-642043 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0910 18:46:35.171183   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-642043 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m14.39852542s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (74.40s)

TestNetworkPlugins/group/calico/Start (75.18s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-642043 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-642043 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m15.177970243s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.18s)

TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-642043 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

TestNetworkPlugins/group/auto/NetCatPod (11.21s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-642043 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h58sg" [ba2db085-86e4-4ba3-9479-669598f936df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h58sg" [ba2db085-86e4-4ba3-9479-669598f936df] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004569083s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.21s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rqwcd" [237a573c-833d-461e-a991-046726927c25] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00559564s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-642043 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-642043 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-642043 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-642043 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-642043 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-67tsk" [3e5d2b93-1f92-4d29-acf0-c7c154a34f88] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-67tsk" [3e5d2b93-1f92-4d29-acf0-c7c154a34f88] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004965052s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.27s)

TestNetworkPlugins/group/custom-flannel/Start (70.57s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-642043 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-642043 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m10.5687189s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.57s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-642043 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-642043 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-642043 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/Start (93.9s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-642043 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-642043 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m33.896948261s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (93.90s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9hvwg" [4287752d-1eff-4a27-8f8e-a19209e30ec6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005330275s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-642043 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

TestNetworkPlugins/group/calico/NetCatPod (14.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-642043 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tj2lb" [1f3d06e4-f4af-437d-92a1-992c0e05d643] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tj2lb" [1f3d06e4-f4af-437d-92a1-992c0e05d643] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.004752541s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.26s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-642043 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-642043 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-642043 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/flannel/Start (65.05s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-642043 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-642043 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m5.054318283s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.05s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-642043 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-642043 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lhnnj" [90ee7b0c-c3b6-4556-ac3d-34e1d945ad87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lhnnj" [90ee7b0c-c3b6-4556-ac3d-34e1d945ad87] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004388644s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-642043 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-642043 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-642043 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/bridge/Start (88.82s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-642043 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-642043 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m28.820220776s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.82s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-642043 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-642043 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-b92zh" [d57faa63-2f7a-4206-8c08-ce6dcefc3b42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-b92zh" [d57faa63-2f7a-4206-8c08-ce6dcefc3b42] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005728923s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-642043 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-642043 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-642043 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-cm6zl" [f582363f-f50e-4845-8983-6df39fe4af0b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005954409s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-642043 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (13.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-642043 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-g7hf5" [0b9a07c8-fa5f-453c-8b07-4d0eef31d484] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-g7hf5" [0b9a07c8-fa5f-453c-8b07-4d0eef31d484] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.004696562s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.27s)

TestStartStop/group/no-preload/serial/FirstStart (119.18s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-347802 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-347802 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m59.180553638s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (119.18s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-642043 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-642043 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-642043 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestStartStop/group/embed-certs/serial/FirstStart (72.93s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-836868 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-836868 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m12.932250935s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (72.93s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-642043 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

TestNetworkPlugins/group/bridge/NetCatPod (10.45s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-642043 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tfn9f" [8587abe5-3022-449e-a730-eac0950c9f7c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tfn9f" [8587abe5-3022-449e-a730-eac0950c9f7c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.134221611s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.45s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-642043 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-642043 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-642043 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-557504 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-557504 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (54.085561258s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.09s)

TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-836868 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [13271a5e-f6a8-4a3d-98b2-9eaf96b8dff5] Pending
helpers_test.go:344: "busybox" [13271a5e-f6a8-4a3d-98b2-9eaf96b8dff5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [13271a5e-f6a8-4a3d-98b2-9eaf96b8dff5] Running
E0910 18:51:35.171703   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/addons-306463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004119448s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-836868 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.4s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-836868 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-836868 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.314475222s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-836868 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.40s)

TestStartStop/group/no-preload/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-347802 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8483124b-e9cd-4e09-b59f-e47a68cd90a7] Pending
helpers_test.go:344: "busybox" [8483124b-e9cd-4e09-b59f-e47a68cd90a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8483124b-e9cd-4e09-b59f-e47a68cd90a7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004978747s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-347802 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-347802 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-347802 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-557504 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d0a8517a-170a-406e-89f5-7cc376bb0908] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d0a8517a-170a-406e-89f5-7cc376bb0908] Running
E0910 18:52:04.781433   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:04.787800   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:04.799688   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:04.821027   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:04.862461   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:04.943916   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:05.106013   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:05.427944   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:06.070087   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:52:07.351392   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005024861s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-557504 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-557504 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-557504 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/embed-certs/serial/SecondStart (637.61s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-836868 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-836868 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (10m37.376460261s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-836868 -n embed-certs-836868
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (637.61s)

TestStartStop/group/no-preload/serial/SecondStart (608.88s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-347802 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0910 18:54:34.231602   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-347802 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (10m8.635068486s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-347802 -n no-preload-347802
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (608.88s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (566.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-557504 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0910 18:54:41.357266   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:43.918852   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:44.472944   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:48.641251   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/auto-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:49.041171   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:59.283337   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:54:59.526030   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/kindnet-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:55:04.954687   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:55:05.005115   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/custom-flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:55:19.764645   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:55:38.986633   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:55:38.992963   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:55:39.004255   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:55:39.025611   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:55:39.067023   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:55:39.148450   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:55:39.309981   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:55:39.631689   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:55:39.731196   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/calico-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:55:40.273378   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:55:41.554952   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:55:44.117208   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:55:45.916018   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:55:49.239379   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:55:59.481259   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/bridge-642043/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-557504 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m25.780057933s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-557504 -n default-k8s-diff-port-557504
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (566.03s)
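Note: the cert_rotation.go:171 errors interleaved above reference client.crt files under other (network-plugin) profiles such as flannel-642043 and bridge-642043 that are no longer on disk; they did not affect this SecondStart pass. A minimal, hedged check on the CI host (path copied verbatim from the errors above, not part of the test suite):

    # expected: "No such file or directory", matching the cert_rotation messages
    ls -l /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt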

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (2.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-432422 --alsologtostderr -v=3
E0910 18:56:00.726657   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-432422 --alsologtostderr -v=3: (2.282714276s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-432422 -n old-k8s-version-432422
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-432422 -n old-k8s-version-432422: exit status 7 (64.483335ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-432422 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
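Note: in this sub-test `minikube status --format={{.Host}}` exits 7 while printing "Stopped"; the harness records "status error: exit status 7 (may be ok)" and still enables the dashboard addon against the stopped profile. A minimal reproduction sketch, using the commands exactly as they appear above (run from the directory holding out/minikube-linux-amd64):

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-432422 -n old-k8s-version-432422
    echo $?   # 7 here corresponds to the "Stopped" host state shown in the log
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-432422 --images=MetricsScraper=registry.k8s.io/echoserver:1.4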

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (47.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-374465 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0910 19:19:23.979391   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/enable-default-cni-642043/client.crt: no such file or directory" logger="UnhandledError"
E0910 19:19:38.788779   13121 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5973/.minikube/profiles/flannel-642043/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-374465 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (47.277546809s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-374465 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-374465 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.019276305s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-374465 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-374465 --alsologtostderr -v=3: (10.394525959s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.39s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-374465 -n newest-cni-374465
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-374465 -n newest-cni-374465: exit status 7 (63.363859ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-374465 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-374465 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-374465 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (36.649989733s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-374465 -n newest-cni-374465
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.88s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
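Note: the newest-cni profile is started with --network-plugin=cni, a custom pod CIDR, and --wait=apiserver,system_pods,default_sa only; that is presumably why EnableAddonWhileActive, UserAppExistsAfterStop and AddonExistsAfterStop each log "cni mode requires additional setup before pods can schedule" and complete almost instantly. The start invocation, copied from FirstStart above:

    out/minikube-linux-amd64 start -p newest-cni-374465 --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.0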

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-374465 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-374465 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-374465 -n newest-cni-374465
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-374465 -n newest-cni-374465: exit status 2 (231.476511ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-374465 -n newest-cni-374465
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-374465 -n newest-cni-374465: exit status 2 (223.611069ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-374465 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-374465 -n newest-cni-374465
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-374465 -n newest-cni-374465
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.37s)
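Note: after `pause`, the status queries above exit 2 with the APIServer reported as "Paused" and the Kubelet as "Stopped"; after `unpause`, the same queries succeed. A minimal sketch of the sequence, using the commands as they appear in the log:

    out/minikube-linux-amd64 pause -p newest-cni-374465 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-374465 -n newest-cni-374465   # "Paused", exit 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-374465 -n newest-cni-374465     # "Stopped", exit 2
    out/minikube-linux-amd64 unpause -p newest-cni-374465 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-374465 -n newest-cni-374465   # succeeds after unpause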

                                                
                                    

Test skip (37/312)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.0/cached-images 0
15 TestDownloadOnly/v1.31.0/binaries 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
253 TestNetworkPlugins/group/kubenet 3
262 TestNetworkPlugins/group/cilium 3.05
268 TestStartStop/group/disable-driver-mounts 0.29
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-642043 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-642043

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-642043

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-642043

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-642043

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-642043

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-642043

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-642043

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-642043

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-642043

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-642043

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-642043

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-642043" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-642043" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-642043" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-642043" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-642043" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-642043" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-642043" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-642043" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-642043" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-642043" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-642043" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-642043

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-642043"

                                                
                                                
----------------------- debugLogs end: kubenet-642043 [took: 2.863996901s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-642043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-642043
--- SKIP: TestNetworkPlugins/group/kubenet (3.00s)
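Note: every debugLogs failure above traces back to the kubenet-642043 profile/context never being created for this skipped test: the kubectl config dumped in the log is empty (clusters, contexts and users are all null), so kubectl reports the context as missing and minikube suggests creating the profile. A minimal, hedged way to confirm this on the host, assuming kubectl and the built minikube binary from this run:

    kubectl config get-contexts kubenet-642043      # errors when the context does not exist
    out/minikube-linux-amd64 profile list           # lists only the profiles that were actually created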

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-642043 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-642043

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-642043

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-642043

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-642043

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-642043

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-642043

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-642043

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-642043

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-642043

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-642043

>>> host: /etc/nsswitch.conf:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: /etc/hosts:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: /etc/resolv.conf:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-642043

>>> host: crictl pods:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: crictl containers:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> k8s: describe netcat deployment:
error: context "cilium-642043" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-642043" does not exist

>>> k8s: netcat logs:
error: context "cilium-642043" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-642043" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-642043" does not exist

>>> k8s: coredns logs:
error: context "cilium-642043" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-642043" does not exist

>>> k8s: api server logs:
error: context "cilium-642043" does not exist

>>> host: /etc/cni:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: ip a s:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: ip r s:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: iptables-save:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: iptables table nat:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-642043

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-642043

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-642043" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-642043" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-642043

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-642043

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-642043" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-642043" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-642043" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-642043" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-642043" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: kubelet daemon config:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> k8s: kubelet logs:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-642043

>>> host: docker daemon status:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: docker daemon config:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: docker system info:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: cri-docker daemon status:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: cri-docker daemon config:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: cri-dockerd version:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: containerd daemon status:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: containerd daemon config:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: containerd config dump:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: crio daemon status:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: crio daemon config:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: /etc/crio:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

>>> host: crio config:
* Profile "cilium-642043" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-642043"

----------------------- debugLogs end: cilium-642043 [took: 2.918949612s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-642043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-642043
--- SKIP: TestNetworkPlugins/group/cilium (3.05s)

x
+
TestStartStop/group/disable-driver-mounts (0.29s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-186737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-186737
--- SKIP: TestStartStop/group/disable-driver-mounts (0.29s)